
The Artificial Intelligence arms race

Tue, 10th Oct 2023

Generative AI is expected to expedite processes because of its ability to harness huge amounts of data and distil it succinctly into intelligible, meaningful information. It will dramatically impact the digital world, with many arguing that it is more of a threat than a boon. But AI also promises to be a force for good. So, in this new arms race, how will malicious attackers use AI, and how can cybersecurity use it to fight back?

Now that the genie is out of the bottle, it's difficult to see how it can be put back. The EU has moved to regulate the technology with the AI Act, which is expected to come into force before the year is out, although it won't become mandatory for two or three years. The Act has drawn a largely positive reaction because, far from attempting to muzzle AI, the legislation is built around acceptable versus unacceptable risk, ensuring it will remain relevant as AI matures.

In the same fashion, cybersecurity also needs to adapt rapidly. It's now a given that attacks will become AI-driven, and we've seen vendors demonstrate the technology's capabilities in this area, from impersonating banks to crafting convincing phishing campaigns with no tell-tale errors to creating reverse shells and malware.

Generative AI is also expected to lower the barrier to entry by making it easier and less costly to carry out campaigns or to obtain phishing-as-a-service (PhaaS) or ransomware-as-a-service (RaaS) toolkits. Estimates suggest it could cut costs for cybercrime gangs by up to 96%, according to a report in New Scientist.

Yet defenders haven't rested on their laurels: some are looking at how AI can be integrated and used to accelerate threat detection and defence, enabling the business to fight fire with fire. From faster analysis and reporting to reality-based security awareness, AI also promises to make defence more dynamic.

Faster reporting and response
A good example of where AI can provide a quick win is in its ability to ingest and summarise data. Security solutions have to wade through copious amounts of data, and in the event of a breach, the Security Operations Centre (SOC) team will need to digest it and supplement it with information from internet sources in order to formulate a response plan. All of this lengthens the mean time to respond (MTTR). ChatGPT, however, promises to reduce this significantly: even for a single case, it can condense source material more than ten times the final length into a one- or two-page report, in minutes rather than hours.
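
To make that concrete, here is a minimal sketch of the summarisation step, assuming the OpenAI Python SDK (openai>=1.0), an API key in the OPENAI_API_KEY environment variable, and a hypothetical alerts.json export of case data from the SIEM; the model name and prompt wording are illustrative only.

```python
# Minimal sketch: condense raw SOC case data into a short incident report.
# Assumptions: OpenAI Python SDK (openai>=1.0) installed, OPENAI_API_KEY set,
# and "alerts.json" as a hypothetical export of case data from the SIEM.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("alerts.json") as f:
    alerts = json.load(f)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a SOC assistant. Summarise the following alert data "
                "into a one-to-two-page incident report covering the timeline, "
                "affected assets, suspected technique, and recommended next steps."
            ),
        },
        # Crude length cap so a large case file doesn't blow the context window.
        {"role": "user", "content": json.dumps(alerts)[:12000]},
    ],
)

print(response.choices[0].message.content)
```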

Integrating ChatGPT with a Security Information and Event Management (SIEM) system and a Security Orchestration, Automation and Response (SOAR) platform can again see data from multiple sources collated and summarised. From the SOAR, ChatGPT can generate summaries of ongoing cyber investigations or compliance reports, reducing the team's workload and freeing them up to focus on more important tasks. The security team can incorporate SOAR responses into playbooks and use cases, customising their incident response and keeping it relevant to emerging tactics, techniques and procedures (TTPs), ensuring the response is far more immediate.
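
As a sketch of what that integration step might look like: the SOAR endpoint, bearer token, and response shape below are hypothetical placeholders, since real platforms such as Splunk SOAR or Cortex XSOAR each expose their own APIs.

```python
# Sketch of a SOAR-to-ChatGPT integration step. The endpoint, token handling,
# and JSON shape are hypothetical; substitute your SOAR platform's real API.
import json
import os

import requests
from openai import OpenAI

SOAR_URL = "https://soar.example.com/api/v1/incidents"  # hypothetical endpoint


def summarise_open_incidents() -> str:
    """Pull open incidents from the SOAR and return an LLM-written digest."""
    incidents = requests.get(
        SOAR_URL,
        headers={"Authorization": f"Bearer {os.environ['SOAR_TOKEN']}"},
        params={"status": "open"},
        timeout=30,
    ).json()

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise these open cyber investigations for the SOC "
                    "lead: one paragraph per incident, flagging any that "
                    "appear to share the same TTPs."
                ),
            },
            {"role": "user", "content": json.dumps(incidents)},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_open_incidents())
```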

SOAR playbook outputs are also used to create breach reports for CISOs/CSOs, and ChatGPT can help by condensing these and outlining the main findings and recommendations for remediation. Other reports can be distilled in the same manner to produce security performance summaries and highlight areas for improvement. This, in turn, provides the C-suite with data-driven insights and ensures they are kept well informed and able to make the necessary business decisions quickly and effectively.
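
The condensation itself is mostly a matter of framing the prompt for the audience. A minimal sketch, again with an illustrative prompt and a hypothetical playbook_output.txt export:

```python
# Sketch: condense a SOAR playbook output into a C-suite breach summary.
# The file name and prompt are illustrative; the key idea is the audience
# framing, which steers the model away from analyst-level detail.
from openai import OpenAI


def executive_summary(playbook_report: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Condense this breach report for a CISO/CSO audience in "
                    "under 300 words: main findings, business impact, and "
                    "recommended remediation. Avoid analyst jargon."
                ),
            },
            {"role": "user", "content": playbook_report},
        ],
    )
    return response.choices[0].message.content


with open("playbook_output.txt") as f:  # hypothetical playbook export
    print(executive_summary(f.read()))
```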

Focused phishing
Further down the ranks, playbooks coupled with AI can be used to improve security awareness. The SOAR can extract data from staff profiles on LinkedIn, along with email addresses and connections from past logs, and the AI can then automatically generate a far more convincing phishing email from that data. This makes the exercise far more realistic and relevant, which is precisely where phishing exercises typically fall short, and so more likely to improve alertness.
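
A simple sketch of the personalisation step is below. The profile fields, template, and tracking header are all hypothetical; in a real deployment the data would come from the SOAR's enrichment feeds, and every generated message would stay inside the awareness platform's simulation tooling.

```python
# Sketch of the personalisation step for an internal phishing-awareness
# exercise. Profile fields, template, and the tracking header are hypothetical;
# a real deployment would source them from the SOAR's enrichment data and send
# only through the awareness platform's own simulation tooling.
from dataclasses import dataclass


@dataclass
class StaffProfile:
    name: str
    role: str               # e.g. pulled from a public LinkedIn profile
    recent_connection: str  # a colleague seen in past email logs


TEMPLATE = (
    "Hi {name},\n\n"
    "{connection} mentioned you're leading the {role} review this quarter. "
    "Could you confirm the figures in the attached sheet before Friday?\n"
)


def build_simulation(profile: StaffProfile) -> dict:
    """Return a simulated phishing email, tagged for the awareness platform."""
    return {
        "to": profile.name,
        "body": TEMPLATE.format(
            name=profile.name,
            connection=profile.recent_connection,
            role=profile.role,
        ),
        # Hypothetical header so the mail gateway and reporting tools can
        # distinguish the exercise from a genuine attack.
        "headers": {"X-Phish-Simulation": "awareness-exercise"},
    }


print(build_simulation(StaffProfile("Alex", "budget", "Sam from Finance")))
```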

For those who outsource their security operations to a Managed Security Service Provider (MSSP), ChatGPT again reduces MTTR. In the event of a breach, the business wants to know the likely impacts and the steps it needs to take to mitigate them. MSSPs can use generative AI in this capacity to accelerate data aggregation and analysis, automating the process and freeing the MSSP to focus on remediation, recovery and advising the end customer.

Of course, generative AI tools such as ChatGPT, Bard and Bing don't always get it right, sometimes coming back with 'hallucinations' that are just plain wrong. Will threat actors dedicate the necessary resources to weeding out those instances? It's doubtful, and it's here that the security team can gain the upper hand: by refining their prompting skills to catch discrepancies and sharing any hallucination-generating prompts with the wider security community.
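
One lightweight discipline that helps here is a grounding check: before an AI-written summary goes anywhere, confirm that every indicator it cites actually appears in the underlying data. A minimal sketch follows, with a deliberately narrow regex covering only IPv4 addresses and CVE identifiers, and made-up example data.

```python
# Sketch of a grounding check on an AI-written summary: any IP address or CVE
# identifier cited in the summary but absent from the raw source data is
# flagged as a possible hallucination. The regex is deliberately narrow.
import re

IOC_PATTERN = re.compile(r"\b(?:\d{1,3}(?:\.\d{1,3}){3}|CVE-\d{4}-\d{4,})\b")


def unsupported_indicators(summary: str, source_data: str) -> set:
    """Return indicators the summary cites that never appear in the source."""
    cited = set(IOC_PATTERN.findall(summary))
    present = set(IOC_PATTERN.findall(source_data))
    return cited - present


# Made-up example: the CVE below does not appear in the logs, so it's flagged.
summary = "Outbound traffic to 203.0.113.7 exploited CVE-2023-12345."
raw_logs = "firewall: blocked outbound 203.0.113.7 tcp/443"
print(unsupported_indicators(summary, raw_logs))  # {'CVE-2023-12345'}
```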

Is there a business case?
Even before generative AI came on the scene, the appetite was growing for more granular data insight at board level, and AI plays directly to this. CISOs/CSOs want access to a full control panel to see what their digital company looks like so they can formulate answers for management teams and the board, but they don't have the time to get knee-deep in the data.

With generative AI, they can get to the core of the matter in minutes, and it's this ability, as well as the need to speed up detection and response, that will drive adoption of and justify investment in cybersecurity solutions that are integrated with generative AI.

SOC analysts will always be the data experts, but in this new world, they will also act as quality control, checking the AI's output. They'll be the overseers who use AI to accelerate MTTR, enabling them to do what they do best: analyse the results, respond, and then improve the organisation's defences against the next similar attack.

There's no reason to regard the emergence of generative AI as different from any other disruptive technology we've seen over the past few decades. Yes, it ups the ante, but if the sector can react quickly enough and apply AI to fight phishing attacks with phishing awareness, analyse and patch code for weaknesses and exploitable vulnerabilities, and use it to drive down MTTR, we may yet enable the defenders to beat the attackers at their own game.
