
ChatGPT is here to stay: Can organisations harness it safely?

Thu, 19th Oct 2023

From the leaking of trade secrets to confidentiality, copyright, and ethical use, a range of risks surrounds the use of ChatGPT. Right now, every client or prospect conversation we are having seems to involve some mention of the platform. It gained traction quickly: ChatGPT was named the fastest-growing application of all time in January 2023, hitting the 100 million active users milestone in just two months. Fast forward to June, and the website was generating roughly 1.6 billion monthly visits.

However, amidst the hype, it's important for organisations and their security teams to recognise the implications and potential pitfalls of ChatGPT.   

We'll dive into three that are important for enterprises to be aware of.  
 
#1 – It's being leveraged by threat actors 

While ChatGPT itself has not so far been infiltrated directly, threat actors are already believed to be leveraging it for nefarious ends. Indeed, Check Point Research has highlighted instances of cybercriminals using the platform to help develop information-stealing malware and to craft increasingly convincing spear-phishing emails.

We believe the latter will remain the primary use of ChatGPT among threat actors for the foreseeable future – something that is likely to create significant problems.

Security awareness training typically centres on spotting discrepancies, such as misspellings and awkward subject lines, that could indicate something is wrong. However, many of those typical indicators disappear when a badly written email is put through ChatGPT with a request to make it sound as though it comes from a government agency, consumer brand, or other trusted sender. It also means threat actors no longer need to rely on their first language, using generative AI to translate phishing emails convincingly into several others.

#2 – ChatGPT suffered a data breach in 2023 

One of the points that stands out from a security perspective is that ChatGPT itself has already suffered a data breach in 2023, caused by a bug in an open-source library.

On deeper investigation, OpenAI revealed that the bug may have unintentionally exposed payment-related information belonging to 1.2% of ChatGPT subscribers who were active during a specific nine-hour window.

Given the platform's huge uptake, it is an ideal site for a 'watering hole' attack. If cybercriminals could successfully compromise it through other, potentially hidden vulnerabilities and serve malicious code through it, they could impact millions of users.
 
#3 – Misuse by employees  

ChatGPT works in a similar way to social media – once you have posted something, it's out there. Therefore, we must also consider the potential for misuse by employees.  

You can't put the genie back in the bottle. However, this isn't necessarily understood yet, as an incident involving Samsung demonstrated. One of the company's developers inadvertently pasted confidential data – specifically, source code for a new program – into ChatGPT to see if it could help debug it.

Unfortunately, ChatGPT retains that user input to further train itself. As a result, if another user were to ask about something similar, they could well be served some of that confidential Samsung data.

To help companies prevent this, OpenAI recently rolled out ChatGPT Enterprise – a paid-for subscription service offering assurances that customer prompts and company data will not be used to train OpenAI models. However, those assurances apply only to the paid subscription, and there is no guarantee that organisations will acquire it or that employees will adhere to its proper usage.
 
How to harness ChatGPT in a secure manner 

In response to these risks, some firms have blocked the use of ChatGPT outright. However, in the long term, this is likely to impede overall enterprise performance.  

If harnessed in the right way, ChatGPT can offer many benefits. AI is incredibly effective at doing the things humans are not good at: the jobs that take time and aren't enjoyable. Given a large volume of data, AI is great at pulling out the correlations and themes that are interesting to look at.

The key is to use AI to enhance productivity, freeing employees up to focus on high-value tasks that require creativity or subjective judgement, which AI is simply not equipped to provide.

With this in mind, instead of blocking it, organisations should find ways to harness ChatGPT in a secure and safe manner. OpenAI's own subscription service is one improvement on this front. However, for maximum protection, enterprises should embrace a multi-layered security strategy comprising a variety of tools.

At Menlo Security, we recommend the use of isolation technology. It can act as a DLP tool, allowing organisations to control what users can and cannot copy or paste – whether files, images or text – to an external site where it could be misused. It can also record session data, enabling organisations to keep track of end-user policy violations on platforms such as ChatGPT, like the submission of sensitive data, in their web logs.

Not only that, but isolation ensures that all active content is executed in a cloud-based browser rather than on the user's end device, so malicious payloads never have the opportunity to reach the target endpoint.
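
To make the idea more concrete, the minimal sketch below illustrates the kind of copy-and-paste rule such a layer might enforce before clipboard content reaches a generative AI site. It is illustrative only, not a description of Menlo Security's actual product logic: the site list, patterns and function names are hypothetical.

    import re

    # Illustrative DLP-style rule: block pastes of sensitive-looking text
    # into generative AI sites. Site list and patterns are hypothetical.
    GENAI_SITES = {"chat.openai.com", "chatgpt.com"}

    SENSITIVE_PATTERNS = [
        re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # embedded private keys
        re.compile(r"\b\d{16}\b"),                                 # card-like 16-digit numbers
        re.compile(r"(?i)confidential|internal use only"),         # classification markings
    ]

    def allow_paste(destination_host: str, clipboard_text: str) -> bool:
        """Return True if pasting clipboard_text into destination_host should be allowed."""
        if destination_host not in GENAI_SITES:
            return True  # this sketch only scopes the policy to generative AI sites
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(clipboard_text):
                return False  # block the paste; a real product would also log the violation
        return True

    # Example: source code carrying an internal classification marking is blocked
    print(allow_paste("chat.openai.com", "# Internal use only\ndef new_feature(): ..."))  # False
    print(allow_paste("example.com", "hello"))                                            # True

In practice, the enforcement point would sit in the isolated cloud browser rather than in application code, but the shape of the policy (scoped sites, content patterns, block-and-log decisions) is the same.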

These capabilities make isolation a key tool to consider for security teams looking to expand their arsenal of protection solutions, particularly when prioritising the safe use of platforms like ChatGPT.