IT Brief Australia - Technology news for CIOs & IT decision-makers

AI arms race fuels surge in cybercrime & deepfake scams


Cybersecurity experts are warning that the rapid evolution of artificial intelligence is providing new tools for cybercriminals, heightening digital risks for Australian organisations and individuals.

In recent months, security researchers have observed a notable uptick in the deployment of infostealer malware across Australia. These malicious programmes infiltrate both personal and corporate systems, quietly harvesting key data including banking details, emails, and passwords stored in browsers.

Research undertaken by a Sydney-based cybersecurity firm revealed that more than 30,000 Australians had their banking credentials stolen by infostealer malware between 2021 and 2025. The malware's ability to capture authentication cookies is particularly concerning, as it allows criminals to circumvent multi-factor authentication (MFA) safeguards: a stolen session cookie lets an attacker resume an already-authenticated session without ever facing an MFA prompt. The Australian Signals Directorate has issued advisories over the past year highlighting the growing prominence of these threats, which played a notable role in criminal cyber activity throughout 2023.
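One common mitigation for this kind of cookie theft is binding a session to the client attributes observed at login, so a cookie replayed from an attacker's machine is rejected even though the cookie itself is valid. The sketch below is purely illustrative (the function names, fingerprint recipe, and session structure are assumptions, not drawn from any specific product):

```python
import hashlib

# Hypothetical sketch: bind a session cookie to client attributes seen at
# login, so a stolen cookie replayed from a different machine is refused.

def fingerprint(ip: str, user_agent: str) -> str:
    """Derive a stable fingerprint from client attributes."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def issue_session(ip: str, user_agent: str) -> dict:
    """Record the fingerprint alongside the session at authentication time."""
    return {"token": "opaque-session-token", "fp": fingerprint(ip, user_agent)}

def validate_session(session: dict, ip: str, user_agent: str) -> bool:
    """Reject requests whose fingerprint differs from the one bound at login,
    even if the cookie itself is valid -- the pattern left by cookie theft."""
    return session["fp"] == fingerprint(ip, user_agent)

session = issue_session("203.0.113.10", "Mozilla/5.0")
print(validate_session(session, "203.0.113.10", "Mozilla/5.0"))  # legitimate reuse
print(validate_session(session, "198.51.100.7", "curl/8.0"))     # replayed stolen cookie
```

In practice, fingerprints built on IP addresses produce false positives for mobile users, which is why real deployments tend to combine several weaker signals rather than rely on any single one.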

Small businesses are especially exposed due to limited cybersecurity resources. Cyber incidents often go undetected until financial losses or data breaches become apparent, given the covert nature of infostealer attacks.

Business Email Compromise (BEC) schemes have also evolved, with artificial intelligence now employed to automate highly personalised phishing campaigns. Attackers scrape public records and analyse targets' communication styles to craft emails nearly indistinguishable from those of legitimate executives or business partners.

This new wave of social engineering has proven effective at bypassing MFA controls. According to cybersecurity firm Kroll, "90% of organisations investigated for BEC attacks had MFA in place at the time of unauthorised access, indicating that attackers are effectively bypassing these security measures."

The development and use of deepfake technology present an additional challenge. Generative AI tools are capable of producing realistic audio and video forgeries, misleading not only individuals but entire organisations. Australian businesses have reportedly lost tens of millions of dollars to deepfake scams in the past year. MasterCard has stated, "20% of businesses have been targeted over the past year due to economic slowdown and insufficient cybersecurity investments." Fraudsters have used AI-generated images, audio, and even orchestrated video calls with convincing fake executive impersonations to conduct scams.

Artificial intelligence is also being leveraged for real-time impersonation in text-based communications. Cybercriminals deploy AI-powered chatbots to impersonate individuals during live phishing attempts via email or social media, posing a particularly serious risk to those in sectors such as finance, law, and government.

Security organisations are advocating for a shift from traditional defence strategies. Signature-based antivirus and rudimentary spam filters are proving inadequate against the latest threats. There is increasing emphasis on building an intelligent, adaptive cybersecurity approach, making use of AI for both detection and rapid response.

Borderless CS, a cybersecurity consultancy in Victoria, is working with small and medium-sized enterprises (SMEs) and local councils to address these challenges. The consultancy urges, "AI must be met with AI," and recommends a multi-layered defence. This includes behaviour-based detection tools, zero-trust network configurations, and ongoing cyber awareness initiatives tailored to keep pace with emerging risks.

Borderless CS also highlights the persistent risk of human error within organisations. The consultancy notes, "Human error is still the easiest way in, and AI makes that path even smoother for attackers." To mitigate this, the firm advocates regular employee training focused on recognising deepfakes, detecting spear phishing attempts, and adhering to communication protocols when confronted with suspicious activities.

Recommended best practices for Australians include upgrading endpoint protections with AI-powered, behaviour-focused tools, reinforcing MFA practices with hardware or app-based solutions that offer geofencing, and educating employees through regular phishing simulations and deepfake awareness workshops. The adoption of zero trust principles—assuming no internal traffic is inherently safe and granting only minimal, necessary access—is strongly encouraged. Monitoring for potential credential theft via dark web scanning services is also advised.
