
Five ways cybercriminals are making use of ChatGPT

Tue, 13th Jun 2023

It's only been available to the public for around six months, but ChatGPT has quickly become one of the most talked-about platforms ever created.

Using the power of generative AI, ChatGPT can help users with everything from copywriting and summarising reports to creating poetry.

While the new tool can clearly deliver big benefits to individuals and businesses, it can also amplify the severity and complexity of cyber threats. Immediately following its release, cybersecurity experts warned that it would only be a matter of time before criminals began using it to craft malware or augment phishing attacks.

These predictions have already come true: cybercriminals have begun using the tool to recreate malware strains and perpetrate a range of attacks. As the technology's capabilities increase, so will their ability to mount sophisticated campaigns.

Five ways ChatGPT is being used by cybercriminals

Assessment of the current security threats faced by organisations shows there are five main ways in which ChatGPT is being put to work by cybercriminals:

1. More targeted phishing attacks:

Criminals are using the Large Language Model (LLM) behind ChatGPT to move away from generic, one-size-fits-all formats and automate the creation of unique phishing or spoofing emails. Generated with accurate grammar and natural speech patterns, these messages are tailored to each target.

This shift means that email attacks crafted with the help of the technology look far more convincing, making it harder for recipients to spot them and avoid clicking on malicious links that may deliver malware or cause other disruption.

2. More effective identity theft attempts:

As well as mounting more targeted phishing attacks, cybercriminals are using ChatGPT to impersonate trusted institutions, exploiting the tool's ability to replicate the corporate tone and discourse of a bank or other organisation. The resulting material is then posted as messages on social media or sent via SMS or email.

Because the messages appear legitimate, people are much more likely to respond to requests for personal identity details, which can then be misused by cybercriminals.

3. Better social engineering attacks:

ChatGPT is also helping cybercriminals to mount sophisticated social engineering attacks. The tool can be used to create fake but very realistic profiles on social media, which trick people into clicking on malicious links or sharing personal information. The key difference is the quality of the materials, which goes well beyond what has traditionally been used in such attacks.

4. The creation of more malicious bots:

Cybercriminals are also using ChatGPT to power other chatbots, thanks to the tool's ability to connect with them via APIs. Users can be convinced they are interacting with a human, making them more likely to hand over personal details and other valuable data.

5. Generation of sophisticated malware:

The power of ChatGPT means it can also take on the task of software creation, something that previously required high-level programming knowledge. As a result, the tool enables cybercriminals with limited technical knowledge, or no coding skills at all, to develop large quantities of malware.

Protecting against this new wave of threats

Security teams need to be aware of this fresh wave of attacks and ensure they have measures in place to counter them. Deploying tools such as endpoint detection and response (EDR) can assist by raising an alert should an attack be attempted.
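
As a simple illustration of the kind of automated check that can complement such tools, the Python sketch below flags email links whose domains closely resemble, but do not match, a list of trusted domains, a common trait of convincingly written phishing messages. The trusted domains, similarity threshold and function names are assumptions made for this example, not features of any particular security product.

import difflib
import re
from urllib.parse import urlparse

# Assumed for illustration only: domains the organisation trusts, and a
# similarity cut-off above which a non-matching domain is flagged.
TRUSTED_DOMAINS = ["examplebank.com.au"]
SIMILARITY_THRESHOLD = 0.8

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def extract_domains(text):
    """Return the hostname of every URL found in the text."""
    return [urlparse(url).hostname or "" for url in URL_PATTERN.findall(text)]

def flag_suspicious_links(email_body):
    """Return link domains that resemble a trusted domain without matching it."""
    suspicious = []
    for domain in extract_domains(email_body):
        for trusted in TRUSTED_DOMAINS:
            similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
            if (domain != trusted
                    and not domain.endswith("." + trusted)  # allow genuine subdomains
                    and similarity >= SIMILARITY_THRESHOLD):
                suspicious.append(domain)
    return suspicious

if __name__ == "__main__":
    sample = "Please verify your account at https://examplebank-secure.com.au/login"
    print(flag_suspicious_links(sample))  # ['examplebank-secure.com.au']

A heuristic like this is no substitute for a full email security gateway, but it shows how even a simple check can surface the lookalike domains that well-crafted phishing emails rely on.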

It's also important that users be educated on the techniques now being harnessed by cybercriminals and the types of attacks they are likely to see. They need to understand that opening just one infected attachment or clicking on a malicious link can have dire consequences for an organisation's entire IT infrastructure.

The capability of tools such as ChatGPT will continue to increase in the months and years ahead. Understanding how they might be used by cybercriminals will become increasingly important for organisations of all sizes.
