Generative AI: A disruptive force at the hands of cyber attackers
Tue, 31st Oct 2023

The introduction of publicly available Generative AI tools at the end of 2022 launched us into one of the biggest technological revolutions in human history.

Some claim it is just as big as, or even bigger than, the introduction of the internet, cell phones, smartphones and social media. The adoption and development rate of these new Generative AI technologies is unlike anything we have seen before.

While there are many implications to this AI revolution, let's focus on the cyber security world.

Generative AI tools are designed to be masterful co-pilots. When it comes to ethical hackers or white hats, many already admit to relying on AI to automate tasks, analyse data, identify vulnerabilities, and more.

We can assume that black hats are using AI, too. Even though we can't really survey black hats, there is evidence they are using AI to find vulnerabilities in applications and platforms, run reconnaissance operations at speed to uncover zero-day vulnerabilities, and analyse their data.

As generative AI chatbots ingest ever more data, their knowledge bases grow rapidly and their outputs become more accurate. With that, they can be manipulated into exposing vulnerabilities in applications, platforms, software, and security tools and mechanisms. They can even write code to bypass an application's security layers.

When Generative AI finds its way into the wrong hands, it can be used for a variety of malicious purposes. These are just a few of the ways bad actors can enlist AI as a co-pilot:

Phishing attacks: AI's powerful editing capabilities make it a perfect co-pilot for generating phishing campaigns. AI can be used to generate well-written, authentic-looking emails, landing pages, URLs and text messages.

As a result, AI opens the door for more non-English-speaking malicious actors to get into the game. With its help, it is now much easier for them to generate convincing, higher-quality phishing attacks on a global scale.

Before AI, we could often spot a malicious landing page, email or text message by its incorrect grammar or unusual wording. Now, it is much harder to tell legitimate content from AI-generated fakes. With that in mind, we can expect to see not only more phishing campaigns in the future but also more successful ones.

Distribution of malicious code libraries: Generative AI can also be used as a co-pilot to speed up code development. However, my advice is to proceed with caution if you use AI chat tools to source code libraries when building applications.

Bad actors are flooding AI data sources with malicious code libraries, spreading them across development environments. That's why it's especially important to carefully vet libraries by checking the creation date and download count before you use them; a minimal sketch of that check follows below.

Keep in mind that even libraries with a history of many downloads can be malicious. My strong recommendation is to avoid using AI tools altogether to download code libraries and packages. It simply is not worth the risk.
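For teams that want to automate that first vetting step, here is a minimal Python sketch that pulls a package's first-release date from the public PyPI JSON API and its recent download count from the pypistats.org statistics API. The vet_package helper and the 1,000-download threshold are illustrative assumptions, not a complete supply-chain check.

import requests

def vet_package(name: str) -> None:
    """Print first-release date and recent downloads for a PyPI package."""
    # Package metadata, including per-release upload times, from the PyPI JSON API.
    meta = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    meta.raise_for_status()
    uploads = [
        f["upload_time_iso_8601"]
        for files in meta.json()["releases"].values()
        for f in files
    ]
    first_seen = min(uploads) if uploads else "no uploads found"

    # Recent download counts from the pypistats.org API.
    stats = requests.get(f"https://pypistats.org/api/packages/{name}/recent", timeout=10)
    stats.raise_for_status()
    last_month = stats.json()["data"]["last_month"]

    print(f"{name}: first published {first_seen}, {last_month} downloads last month")
    if last_month < 1000:  # illustrative threshold; tune to your own risk appetite
        print("  Warning: low download count. Inspect the source before installing.")

vet_package("requests")

As noted above, a long download history is no guarantee of safety, so treat this as a first filter rather than a verdict.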

Smarter bots... and many more of them: With the help of their AI co-pilots, ill-intentioned actors can now manipulate AI chatbots into building advanced new bot scripts, otherwise known as zero-day bots.

As if that were not enough, new AI chat tools designed specifically for nefarious purposes have been made available on the dark web. These tools help hackers and fraudsters generate new automated scripts for their malicious campaigns.

With the emergence of AI, we can expect that this bad bot situation is going to get worse. Today, 30% of internet traffic is driven by bad bots, a number that is certain to rise in the future.

The consequences? Standard bot protection tools will not be able to defend against the growing number and variety of these new AI-generated bot scripts. CAPTCHA might also see its demise as more sophisticated AI-generated bots circumvent traditional CAPTCHA challenges.

To protect organisations adequately, a new form of detection is needed, whether it be unique custom challenges, blockchain-based crypto challenges, new attestation and identity-based user validation services, or even AI-generated challenges for bot mitigation.
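To illustrate one of those options, here is a minimal hashcash-style proof-of-work sketch in Python: the server issues a random nonce, the client must burn CPU to find a counter whose hash clears a difficulty target, and the server verifies the answer with a single hash. The function names and the difficulty value are assumptions for illustration, not a production protocol.

import hashlib
import os

DIFFICULTY = 20  # required leading zero bits; roughly 2**20 hashes of client work

def issue_challenge() -> str:
    """Server side: hand the client a random nonce."""
    return os.urandom(16).hex()

def solve_challenge(nonce: str) -> int:
    """Client side: brute-force a counter until the hash clears the target."""
    counter = 0
    while True:
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return counter
        counter += 1

def verify(nonce: str, counter: int) -> bool:
    """Server side: a single cheap hash checks the client's work."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = issue_challenge()
print(verify(nonce, solve_challenge(nonce)))  # True, after about a second of client CPU

The appeal over CAPTCHA is that the cost scales with volume: one human barely notices the delay, but a bot operator replaying the challenge millions of times pays for every single request.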

Generative AI tools in the wrong hands are a serious threat, which is why their use must be regulated properly. With an AI co-pilot, hackers become tenfold smarter and faster. They can cut the time it takes to discover a vulnerability by 90% and come up with a new one any time an older one is patched.

Unfortunately, regulation lags behind technology. To fill this gap, security teams must deploy advanced application protection solutions that use behavioural algorithms to automatically detect and block zero-day attacks in real time, before they materialise.
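To make the idea of behavioural detection concrete, the toy sketch below flags a client whose request rate suddenly deviates from its own rolling baseline. The WINDOW and THRESHOLD values, and the per-client request-rate metric, are assumptions for illustration; real products combine far more signals than this.

from collections import defaultdict, deque

WINDOW = 30        # number of request-rate samples kept per client
THRESHOLD = 3.0    # z-score above which a client is flagged

history = defaultdict(lambda: deque(maxlen=WINDOW))

def is_anomalous(client_id: str, requests_per_second: float) -> bool:
    """Flag a client whose latest rate deviates sharply from its own baseline."""
    samples = history[client_id]
    flagged = False
    if len(samples) == WINDOW:
        mean = sum(samples) / WINDOW
        # Fall back to 1.0 to avoid division by zero on perfectly flat traffic.
        std = (sum((s - mean) ** 2 for s in samples) / WINDOW) ** 0.5 or 1.0
        flagged = (requests_per_second - mean) / std > THRESHOLD
    samples.append(requests_per_second)
    return flagged

# A client with a steady baseline that suddenly bursts gets flagged:
for rate in [2.0] * WINDOW:
    is_anomalous("client-a", rate)
print(is_anomalous("client-a", 50.0))  # True

The design point is that the baseline is learned per client at runtime, so the check needs no prior signature of the attack, which is what makes this style of detection a plausible answer to zero-day bot scripts.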