Written by Pierre Berteloot on 27 March 2023

Why might it be dangerous to use ChatGPT in your company?

ChatGPT is a chatbot from the company OpenAI that needs no introduction. This AI is a real gem of technology, and an even more powerful version built on GPT-4 was announced recently. However, while ChatGPT can improve employee productivity, it can present security risks for any company that does not use it with care.


Contextual information

According to a report from Cyberhaven, a California-based cybersecurity company, 2.3% of employees have copied and pasted confidential data from their organization into ChatGPT. The report's conclusion is stark: the use of the tool poses high security risks to organizations' "sensitive" data and personal information.


More details and answers

ChatGPT is an AI built on the principle of "deep learning", which means it must be fed a large volume of data in order to learn and improve. In other words, the more input data the AI receives (through its chat box and from data freely available on the internet), the better it should perform and the faster it should learn.

The problem is that some employees use ChatGPT without thinking and include confidential, company-sensitive data in their prompts. Under the "deep learning" principle, the AI is then likely to reuse that data and disclose it to whoever asks the right question. Because the AI is constantly learning, confidential information may come to be treated as legitimate to pass on to anyone, despite its sensitive nature. It is also very difficult, given the colossal volume of data the tool collects, to know which data will be reused and in what context.

That is why Cyberhaven gives the example of a doctor who does not want to write up his medical conclusions and asks ChatGPT to do it for him. In his request, he specifies the patient's name and illness. It would then be theoretically possible for someone to ask ChatGPT "what disease does patient X have?", and it is entirely possible that ChatGPT could answer this question.
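To make this scenario concrete, here is a deliberately oversimplified toy in Python. It is not how ChatGPT actually works (a large language model is statistical, not a lookup table), and the class, names and matching logic are hypothetical illustrations only; it merely shows the underlying worry: a system that absorbs what users type can later repeat it to someone else.

```python
# Toy illustration only: a "model" that memorizes every prompt it sees.
# Real language models do not store prompts verbatim like this, but user
# inputs can still influence what they say to later users.

class MemorizingBot:
    def __init__(self):
        self.memory: list[str] = []

    def chat(self, prompt: str) -> str:
        # Every prompt is absorbed into the "training data".
        self.memory.append(prompt)
        # A later question that overlaps a stored prompt can surface
        # that stored text verbatim.
        for past in self.memory[:-1]:
            if any(word in past for word in prompt.split()):
                return f"Based on what I learned: {past}"
        return "I don't know."

bot = MemorizingBot()
bot.chat("Patient Dupont has diabetes.")           # a doctor pastes sensitive data
print(bot.chat("What disease does Dupont have?"))  # someone else asks later
# -> Based on what I learned: Patient Dupont has diabetes.
```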

When ChatGPT itself is asked whether it might inadvertently transmit sensitive data, here is its answer:

"in most cases, the data used for my training is anonymized or made non-identifiable to protect the privacy of individuals. However, it is always possible for mistakes to occur and sensitive information to be inadvertently disclosed."

Sharing personal information with ChatGPT also raises many privacy issues. Entering personal information into ChatGPT constitutes a transfer of data to a third party without the individual's consent and without clear and transparent information being provided. The major consequence is non-compliance with privacy laws.

When these facts are combined with the tool's growing use and rising popularity in companies, the situation is alarming. ChatGPT's attractive promise and its increasingly advanced features only add to its appeal for employees.

Of course, OpenAI regulates and adjusts its chatbot as it goes, and technical measures are supposed to prevent the algorithm from drifting. However, it has been demonstrated on several occasions that the AI can be manipulated: even when it refuses to answer a question, rephrasing the request or giving the chatbot a different "personality" has led it to answer questions it was not supposed to answer.


Action needed/recommended

What actions should be taken to eliminate, or at least reduce to a minimum, the risk when using ChatGPT?

  • Ban the tool outright by blocking access to the site, and forgo the benefits it can provide. A ban can eliminate the risk on the corporate network, but it does not guarantee that employees will refrain from using the tool in an uncontrolled manner outside that network, with confidential data and personal information that belong to the company.
  • Educate employees on the use of ChatGPT rather than prohibiting it, and so continue to enjoy the tool's benefits, while accepting the risk that employees will not follow the guidelines and that confidential data and personal information will enter the chatbot's learning cycle and one day be leaked. A minimal sketch of one possible safeguard follows this list.
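For organizations that choose the second option, awareness training can be backed by a lightweight technical safeguard. The sketch below is a minimal, hypothetical Python example of a pre-submission filter that redacts obvious identifiers (email addresses, phone numbers) before text is pasted into ChatGPT; the patterns and the redact helper are illustrative assumptions, not a complete data-loss-prevention solution.

```python
import re

# Hypothetical redaction patterns -- real data-loss-prevention tooling
# covers far more identifier types (names, patient IDs, addresses) and
# relies on more than regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    leaves the company (for example, before it is pasted into ChatGPT)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize: the patient can be reached at "
              "j.doe@clinic.example or +1 555 123 4567.")
    print(redact(prompt))
    # -> Summarize: the patient can be reached at
    #    [EMAIL REDACTED] or [PHONE REDACTED].
```

In practice such a filter would live in a proxy or browser extension rather than a standalone script, and names or medical details would still slip through; it complements, but does not replace, employee training.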

So the best conclusion may be the one given by OpenAI's tool itself when asked whether or not its use should be prohibited within organizations: "Ultimately, it's important for companies to weigh the potential benefits and risks of using language models like me, and put appropriate security and privacy measures in place to protect their sensitive data and personal information."

Contact us to protect your business.
