Artificial Intelligence and Cybersecurity: The Bad and the Good
Fri, 1st Dec 2023

It’s frightening. Artificial intelligence can be used, and is already being used, to aid almost every aspect of cybercrime and cyber-attack.

AI can convincingly mimic the voice of a real person. AI tools enable scammers to craft fluent, grammatically correct phishing messages (bad grammar is often a giveaway) and translate them into multiple languages, widening the scope of an attack. AI can also probe systems for vulnerabilities and then craft effective attacks against them.

The possibilities are endless, and authorities are already sounding dire warnings. Air Marshal Darren Goldie, recently appointed as the inaugural National Cyber Security Coordinator in the Department of Home Affairs, told The Australian Financial Review Cyber Summit that the cyber threat environment would change significantly in the next few years.

Earlier this year, the UK Government flagged AI, in all its manifestations, as a strategic risk and added it to the UK’s National Risk Register, which outlines the most serious risks facing the UK.

AI Being Used to Launch Deceptive Cyber Attacks

Voice emulation using AI is not new, but like many other AI-based technologies, it has become far more readily and widely available, significantly increasing the likelihood of its being used to dupe the unsuspecting, as in this case reported by Forbes back in 2021.

A branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognised as that of a director of his parent company. The caller told him to transfer USD 35 million to pay for an acquisition the company was making, saying the details would be provided in an email from a named lawyer. The manager duly received emails that appeared to be from the lawyer and from his director and initiated the transfers, but the whole thing was a carefully crafted cybercrime built on an AI-cloned voice.

Back in 2018, TaskRabbit, an online marketplace for freelance labour hire, was hit by a massive DDoS attack using a botnet said to have been controlled by AI. The attack was reportedly so effective that the entire TaskRabbit site had to be disabled, and the social security and bank account numbers of 3.8 million TaskRabbit users were exfiltrated.

AI As an Aid to Cybersecurity

While AI is being weaponised to launch cybercrimes, there are also many ways it can enhance cybersecurity and better counter cyber-attacks of all kinds: those leveraging AI and those using more traditional approaches.

Once a system has been compromised, any attempt by the attacker to exploit their access inevitably triggers abnormal behaviour in some part of the system. AI tools that constantly monitor system operations can be very effective in rapidly detecting such abnormalities. They can alert humans to their discovery and, in many cases, initiate appropriate countermeasures in far less time than it would take a human to do the same.

In this context, machine learning is particularly useful. Machine Learning (ML), a subset of AI, is the process of teaching algorithms to learn patterns from existing data so they can take appropriate action in response to new data.
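
To make that definition concrete, here is a minimal sketch in Python using scikit-learn. The features, values, and labels are purely illustrative assumptions rather than data from any real deployment; the point is only the pattern described above: an algorithm learns from labelled historical events and then acts on new ones.

```python
# Illustrative sketch: learn patterns from labelled historical events,
# then classify a new, unseen event. Features and labels are assumptions.
from sklearn.ensemble import RandomForestClassifier

# Each row: [failed logins, MB transferred, off-hours flag]
X_train = [
    [0, 12, 0],    # routine working-hours session
    [1, 8, 0],     # routine
    [9, 450, 1],   # brute force followed by bulk transfer
    [7, 300, 1],   # similar suspicious pattern
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# A new event is scored against the learned patterns.
new_event = [[8, 500, 1]]
if model.predict(new_event)[0] == 1:
    print("Alert: event matches known-malicious behaviour")
```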

In cybersecurity, ML models refine their knowledge and capabilities through adaptive learning from a wide range of data sources. Trained on this data, they improve the security of individual endpoints and the broader organisational network by continuously monitoring for, identifying, and mitigating both known and unknown threats. Deep learning, a further subset of ML, extends this capability, allowing security systems to address evolving risks and safeguard organisations in a constantly changing digital landscape.

Any online IT system ingests vast amounts of data, far more than any human can analyse unaided. Such a large volume of data enables a machine learning system to gain an excellent understanding of normal operation and detect any anomalies. However, these protection mechanisms are purely reactive: the AI looks for evidence of a breach and then responds.
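
As a rough illustration of that baseline-and-anomaly idea, the sketch below trains an unsupervised detector on synthetic "normal" telemetry only and flags deviations. The feature choices and numbers are assumptions made for illustration.

```python
# Illustrative sketch: learn a baseline of normal operation from unlabelled
# telemetry, then flag deviations. All numbers here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated normal traffic: [requests per minute, mean payload size in MB]
normal_traffic = rng.normal(loc=[200.0, 1.5], scale=[20.0, 0.2], size=(10_000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Fresh observations: predict() returns 1 for normal, -1 for anomalous.
fresh = np.array([
    [205.0, 1.4],   # consistent with the learned baseline
    [900.0, 14.0],  # sudden spike well outside normal operation
])
print(detector.predict(fresh))  # expected: [ 1 -1 ]
```

Note that the detector never sees an attack during training; it only learns what normal looks like, which is exactly why the approach remains reactive: something abnormal must happen before anything is flagged.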

AI Has the Power to Make Security Proactive

Attack path analysis, or attack path modelling, analyses an IT environment to determine the most likely and most effective paths attackers could take. In a large IT system, this would be a monumental task for an IT team, but AI tools can analyse every possible attack path and model all plausible attacker scenarios.
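
As a rough sketch of the idea, the following models a small, hypothetical network as a weighted directed graph and finds the lowest-effort path an attacker could take. All asset names and effort weights are invented for illustration; real attack path modelling tools work over far richer data, such as identities, permissions, and known vulnerabilities.

```python
# Illustrative sketch of attack path modelling on a hypothetical network.
# Nodes are assets; edge weights approximate attacker effort (lower = easier).
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web_server", 1.0),    # exposed, unpatched service
    ("web_server", "app_server", 2.0),  # lateral movement
    ("internet", "vpn_gateway", 4.0),   # hardened entry point
    ("vpn_gateway", "app_server", 1.5),
    ("app_server", "database", 3.0),    # the crown jewels
])

# The cheapest path is the one an attacker is most likely to try,
# and therefore the first one defenders should harden.
path = nx.shortest_path(g, "internet", "database", weight="weight")
cost = nx.shortest_path_length(g, "internet", "database", weight="weight")
print(path, cost)  # ['internet', 'web_server', 'app_server', 'database'] 6.0
```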

Australia’s Commonwealth Bank recently revealed the extent to which it was using AI and machine learning to counter cyber threats. The bank’s General Manager of Cyber Security told the South by Southwest conference in Sydney that the number of online activities being scanned for threats had risen from 80 million per week to 240 billion per week in three years.

He said the bank was working with an AI company to design, build, test, deploy and govern AI models and that in every single use case, they performed significantly better than the systems they replaced.

At Jamf, we have a machine learning engine called MI:RIAM (Machine Intelligence: Real-time Insights and Analytics Machine). It drives threat intelligence, aids the threat hunting of other security solutions, and enhances protection capabilities to identify more threats and prevent them from impacting systems.

In conclusion, AI is a double-edged sword in cybersecurity: it can be put to malicious use, and it can enhance security measures. While AI can be deployed to mimic human voices and craft phishing messages, it can also be used to detect threats and vulnerabilities in a system. Machine learning, a subset of AI, can be trained to learn patterns and identify anomalies, making cybersecurity more proactive.

It’s hardly surprising, then, that cybersecurity is, according to Forrester Research, the fastest-growing application for AI, particularly for AI tools that can monitor for attacks in real time and respond appropriately.