Written by Pierre Berteloot on 15 May 2023

Can ChatGPT be trusted for the relevance and quality of its answers?

Chatbots such as ChatGPT have become indispensable in the daily lives of many people. These tools let users generate content easily by asking specific questions or entering keywords. However, it should not be forgotten that these tools are still relatively new, and the algorithmic processing they rely on is often imperfect.

Indeed, one of the main challenges with ChatGPT is, as stated above, that its answers are produced by algorithmic processing. ChatGPT can therefore produce answers that are inaccurate, incomplete, or outright false. Users should always verify the information ChatGPT provides before using or publishing it.

A problem inherent in ChatGPT, and in the quality of the information it gives, is centralization. In other words, ChatGPT produces a single response channel (one and only one conversation with the tool), which prevents comparing its answers against different sources. With a single distribution channel, we get only one answer; if that answer is inaccurate or wrong, ChatGPT users may be misled. By contrast, when we use a traditional search engine, we have several channels for checking the quality of the information presented to us: we can weigh a result against the reputation of the site or the name of the expert concerned. ChatGPT does not cite its sources, and because of this centralization its responses are not immediately verifiable.

Of course, it is possible to ask ChatGPT to cite its sources. And this is where another major drawback arises: ChatGPT tends to create links to pages on a website that do not exist. This can happen when ChatGPT identifies relevant key terms or concepts in a response and attempts to link to a relevant source. If that source does not exist, the link is wrong or useless for the user.

The question then becomes: why does ChatGPT create fake links? The answer lies in how ChatGPT's algorithm works. ChatGPT is a natural language processing model based on machine learning. It was trained on a large textual data set to learn to predict the next words in a given text sequence. When a user asks a question or enters keywords in ChatGPT, the algorithm draws on the patterns learned from its training data and tries to predict the most likely answer from the available information. When ChatGPT identifies key terms in a response and attempts to link to a relevant source, it can therefore sometimes produce a link to an inappropriate, or even non-existent, source.
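As a rough illustration of this mechanism, the toy model below recombines URL fragments seen during "training" and happily produces links that never existed. This is only a sketch of the statistical idea: the example.com URLs are made up, and ChatGPT itself is a neural network, not a table of segment counts.

```python
from collections import defaultdict

# Two (fictional) URLs standing in for the model's training data.
training_urls = [
    "example.com/en/decisions/pipeda-2021-001",
    "example.com/fr/decisions/pipeda-2022-003",
]

# Count which path segment follows which in the training URLs.
transitions = defaultdict(set)
for url in training_urls:
    segments = url.split("/")
    for current, nxt in zip(segments, segments[1:]):
        transitions[current].add(nxt)

def expand(segment):
    """Enumerate every URL the model considers 'plausible' from a start segment."""
    if segment not in transitions:
        return [segment]
    return [segment + "/" + rest
            for nxt in sorted(transitions[segment])
            for rest in expand(nxt)]

generated = expand("example.com")
fabricated = [u for u in generated if u not in training_urls]
print(fabricated)
```

The model outputs four "likely" URLs, two of which (for instance `example.com/en/decisions/pipeda-2022-003`) look entirely plausible yet appear nowhere in the training data. A predictor optimized for likely-looking text, rather than verified facts, has no built-in reason to distinguish the two.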

Here is a concrete example:

When we ask ChatGPT to give us three decisions from the Office of the Privacy Commissioner of Canada, the tool is able to give us three decisions that really do exist, and its descriptions are consistent with the decisions actually rendered by the supervisory authority.

However, when we ask ChatGPT to cite its sources and provide links to these decisions, here are the links it gives:

When we click them, we immediately get an error message telling us that the page does not exist. For the informed and conscientious user, this poses major problems. Can we really trust a tool that is unable to cite its sources? Can I knowingly disseminate information when ChatGPT gives me no way to verify the truth of my words?
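Until the tool cites verifiable sources, one pragmatic habit is to check every link it gives before trusting it. Here is a minimal sketch in Python; the `cited_links` list is a placeholder to be replaced with the links ChatGPT actually produced.

```python
import urllib.request
import urllib.error

def link_status(url, timeout=10):
    """Return the HTTP status code for a URL, or None if the request fails."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status
    except urllib.error.HTTPError as err:
        return err.code          # e.g. 404 when the page does not exist
    except (urllib.error.URLError, TimeoutError):
        return None              # unreachable host, bad scheme, or timeout

# Flag any cited link that does not answer with 200 OK.
cited_links = ["https://www.priv.gc.ca/en/"]  # placeholder: the links ChatGPT gave
for url in cited_links:
    status = link_status(url)
    if status != 200:
        print(f"Check manually: {url} (status: {status})")
```

A broken link returning 404 is exactly the "page does not exist" error described above; this check only confirms that a page responds, so whether the page actually supports the claim still has to be verified by reading it.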

In conclusion, ChatGPT can be a useful tool for generating content, but it is important to understand its limitations and to use it responsibly. Users should always check the information ChatGPT provides before using or publishing it, and be aware of the risks posed by the links ChatGPT creates and by the inability to verify its sources.
