Rubrik warns AI agents outpace security guardrails
Wed, 6th May 2026
Rubrik has released Australian survey findings showing most organisations expect AI agents to outpace their security safeguards within a year, pointing to a widening gap between AI adoption and governance.
The research surveyed more than 1,600 IT and security leaders and highlighted concerns about weak visibility, poor identity governance, and limited recovery options as businesses deploy more autonomous systems. In Australia, 88 per cent of respondents said they expected AI agents to move faster than their organisation's security guardrails over the next year.
Only 22 per cent said they had full visibility into the agents operating in their environments. Rubrik suggested that figure may be an overestimate, indicating many organisations have less oversight than they believe.
The findings add to a broader debate over how companies manage the spread of software agents that can make decisions, take actions, and interact with sensitive data with limited human input. Security teams have long struggled to track machine identities, and the report suggests the number of non-human identities linked to AI agents is growing faster than many organisations can govern them.
That growth has created what researchers described as a "shadow workforce", with digital identities operating with persistent access and little oversight. These conditions can create openings for misuse, compromise, and lateral movement within corporate systems.
Operational strain
The Australian results also suggest the promised productivity gains from AI agents have yet to translate smoothly into operational practice. Eighty per cent of respondents said the manual oversight agents required outweighed the efficiency they delivered.
Every Australian respondent said their organisation lacked the ability to roll back agent actions without disrupting systems. This points to a significant weakness in recovery planning at a time when companies are under pressure to automate more business and technology functions.
Recovery concerns were widespread, with 96 per cent of Australian leaders saying they were concerned about meeting recovery objectives as threats linked to agent-driven systems increase.
The findings suggest the challenge goes beyond preventing cyber attacks. They also raise questions about whether organisations can quickly regain control when autonomous software behaves unexpectedly or is manipulated by an attacker.
Attack expectations
Respondents also signalled concern about how AI agents may reshape the threat landscape. More than two in five Australian respondents, or 42 per cent, said they expected agentic systems to drive most attacks in the coming year.
That expectation reflects a shift in security thinking as defenders and attackers alike test systems that can operate at machine speed. Autonomous tools can reduce the time needed to conduct reconnaissance, move across systems, and exploit weaknesses, while making it harder to distinguish between insider activity and external compromise.
For boards and executive teams, the figures suggest AI planning is becoming more closely tied to resilience and risk management. If organisations press ahead with deployment without strengthening controls, they may increase the likelihood of failures that are difficult to contain or reverse.
The report combined global survey responses with technical analysis of attack vectors across the tool, cognitive, and identity layers of AI systems. The Australian findings sit within that broader examination of how autonomous systems are changing security assumptions.
Kavitha Mariappan, Chief Transformation Officer at Rubrik, said companies were deploying systems they could not yet fully supervise or recover from. "AI adoption is outpacing our ability to control it. Enterprises are struggling because they've deployed systems they can't fully observe, govern, or restore," she said.
She said the issue had moved beyond a simple debate about whether AI carries risk. "We have to move past the debate of whether AI is risky and address the harder reality: as decision-making shifts from human to machine, the critical challenge for every leader is maintaining operational safety in an increasingly autonomous landscape," Mariappan said.