Australian firms increase AI use in cybersecurity despite new risks
New research from Trend Micro indicates a rapid uptake of artificial intelligence within cybersecurity functions among Australian organisations, alongside rising concern over new types of cyber risk.
The study shows that 62% of Australian businesses are currently using AI-driven tools within their cybersecurity programmes, with a further 19% actively considering adoption. In total, 93% of Australian respondents express willingness to use AI in some capacity for cybersecurity purposes.
Adoption and reliance
Nearly half of respondents (45%) already rely on AI solutions for key processes such as automated asset discovery, risk prioritisation, and anomaly detection. A further 38% of surveyed Australian organisations list AI and automation among their top priorities for improving cybersecurity practices over the next year.
Speaking on the findings, Andrew Philp, ANZ Field CISO at Trend Micro, stated,
"AI is already transforming how organisations across Australia defend against cyber threats – from faster anomaly detection to automating manual, time-consuming tasks. But as security teams adopt AI, so too do cybercriminals, and as a result, the threat landscape is evolving at speed. Our latest research makes one thing clear: security can't be added as an afterthought, it must be built into AI systems from the get-go."
The study, carried out by Sapio Research on behalf of Trend Micro, involved 2,250 participants globally, with 100 respondents from Australia responsible for IT and cybersecurity across various sectors and company sizes. The research was completed in March 2025.
New risks on the horizon
Despite the enthusiasm for AI in cybersecurity, a significant majority of local businesses foresee new risks. Eighty-seven percent of Australian organisations believe that AI adoption will increase their cyber risk exposure within the next three to five years, and 43% expect a rise in the scale and complexity of AI-driven attacks that will require comprehensive changes to current cybersecurity strategies.
The research highlights several specific risk areas. These include the potential exposure of sensitive data, uncertainty regarding how proprietary information is processed and stored by AI, and the risk of trusted models being exploited or circumvented. Respondents also pointed to increased compliance requirements and challenges inherent in monitoring a greater number of endpoints, application programming interfaces (APIs), and instances of shadow IT that accompany expanded AI integration.
Live testing and vulnerabilities
The ongoing risk is underscored by recent results from Trend Micro's Pwn2Own event in Berlin, where the AI category was featured for the first time. Twelve entries targeted four major AI frameworks, generating a snapshot of current AI security vulnerabilities. The NVIDIA Triton Inference Server was the most frequently targeted platform, while Chroma, Redis, and the NVIDIA Container Toolkit were also compromised—sometimes with only a single vulnerability exploited to achieve full system compromise.
Across these frameworks, participants uncovered seven unique zero-day vulnerabilities. Vendors affected have been given 90 days to address these flaws before further technical details are publicly disclosed.
Call for proactive security
Trend Micro emphasises the importance of integrating security considerations into AI adoption at every stage. The current climate, marked by both rising adoption and escalating concern over risk, challenges organisations to balance opportunity against mitigation as AI becomes more deeply embedded in their IT environments.
The study concludes that Australian technology and security leaders should continually assess their governance and operational approaches in light of the evolving nature of threats accelerated by AI technology.