Nearly half of Australian firms hit by AI incidents
Fri, 1st May 2026 (Today)
Proofpoint research shows that nearly half of Australian organisations with AI security controls have experienced a confirmed or suspected AI-related incident, highlighting a gap between AI deployment and incident readiness.
The survey found that 80% of Australian organisations have moved AI assistants beyond the pilot stage, while 60% are not fully confident their controls would detect a compromised AI system. Only 28% said they were fully prepared to investigate an AI-related incident.
The report is based on a survey of more than 1,400 full-time security professionals across 12 countries and 20 industries. In Australia, it depicts companies adopting AI tools across customer support, internal messaging, email workflows and third-party collaboration, while security teams struggle to keep pace.
Among Australian respondents, 72% said they were advancing autonomous agents. More than half, 53%, described security as catching up, inconsistent or reactive, while 39% reported a suspicious or confirmed AI-related incident in live environments.
The research also suggests existing controls are not always inspiring confidence. While 63% of Australian organisations said they had AI security coverage, 60% were not fully confident those measures would detect a compromised AI system, and 44% of organisations with controls still reported an AI-related incident.
Attack surface
According to the study, threat exposure is spreading beyond email into a wider range of business tools. Email remained the most common threat vector in Australia at 53%, followed by SaaS and cloud applications at 46%, AI assistants or agents at 39%, and social and messaging platforms at 37%.
For organisations that had already experienced an AI-related incident, the figures were higher across all channels. Exposure rose to 67% in third-party SaaS and cloud applications, while 62% involved AI systems.
One concern is the rise of prompt injection attacks, in which malicious instructions embedded in apparently ordinary content can manipulate AI assistants into disclosing sensitive information. The report also highlighted the risk that AI agents with access to internal systems could expose information staff were not meant to see, including salary records and confidential files.
Investigation remains another weak point. More than one-third of Australian respondents, 36%, said they had difficulty correlating threats across channels, an issue that becomes more acute when incidents span email, collaboration platforms and cloud systems.
Australia also lagged some regional counterparts on readiness to investigate, with 28% saying they were fully prepared, compared with 32% in Singapore and 57% in India. That gap matters because AI-related incidents often leave traces across multiple systems rather than a single application.
Tool sprawl
The survey found that security teams are also dealing with fragmented technology environments. Almost all Australian organisations surveyed, 97%, said managing multiple security tools was at least moderately challenging, and 45% described it as very or extremely difficult.
Respondents cited cost pressures, integration problems and overlapping products as key reasons. Operational cost pressures were cited by 45%, integration challenges by 43%, and redundant tools by 39%.
That complexity is shaping buying plans. More than half of Australian organisations, 56%, said they were actively pursuing vendor and tool consolidation, and 54% said a unified platform was more effective than point products.
Organisations also indicated that AI security spending and coverage are likely to increase. The research found that 68% plan to expand AI protections, 58% intend to extend collaboration channel coverage, and 56% expect to move towards a unified platform approach.
Adrian Covich, vice president of systems engineering, APJ, at Proofpoint, linked the findings to recent concerns in Australia about the handling of personal information by AI tools.
"We're already seeing Australian organisations grappling with the threats posed by AI, particularly as agentic adoption grows," Covich said. "A recent example is the release of sensitive NSW government agency information via ChatGPT.
Australian organisations are scaling AI quickly, with huge potential for productivity gains. However, this lack of preparedness carries real consequences. Without a significant change in the security posture of AI systems, these kinds of breaches are likely to become much more commonplace."
The report argues that the main issue is not that AI creates entirely new security categories, but that it can magnify long-standing weaknesses in how organisations handle access, data and authentication.
"While AI has introduced new risks, such as prompt engineering, its bigger impact has been amplifying the risks we've always had," said Ryan Kalember. "Running untrusted code, mishandling sensitive data, and losing control of credentials are the same challenges humans have created for decades. AI executes them at machine speed and scale.
When organisations hand AI the keys to act on their behalf across customers, partners and internal systems, the blast radius of any one of those failures grows dramatically. The answer isn't to treat AI as a novel threat category, but to apply rigorous, proven controls to what AI touches, what it runs, and what it's allowed to authenticate as.
Organisations that get that foundation right early will scale AI confidently. Those that don't are just automating their own exposure."