IT Brief Australia - Technology news for CIOs & IT decision-makers

Unknown AI agents plague 82% of enterprises, survey finds

Tue, 21st Apr 2026

The Cloud Security Alliance has published survey findings showing that 82% of enterprises have unknown AI agents in their IT environments. The study also found that 65% of respondents had experienced at least one AI agent-related incident in the past year.

Commissioned by Token Security, the report is based on 418 responses from IT and security professionals. It points to a gap between how well organisations think they can see AI agents and what they are actually finding in their systems.

While 68% of respondents said they had strong visibility into AI agents, 82% had discovered previously unknown agents during the past year. For 41%, that happened more than once.

Those undiscovered agents were most commonly found in internal automation or scripting environments, cited by 51% of respondents. LLM platforms, including custom tools, assistants and plugins, followed at 47%. SaaS tools with built-in automation and developer-created workflows were each cited by 40%.

The survey linked those visibility problems to a growing number of incidents. Among respondents who reported AI agent-related incidents, 61% cited data exposure, 43% cited operational disruption and 35% cited financial losses.

None of the respondents who experienced incidents reported zero material business impact. The finding suggests the issue has moved beyond experimentation and into day-to-day operational risk for security teams.

Control gaps

The findings also show that most organisations are not allowing unrestricted autonomy for AI agents. Instead, many are applying controls at the points where agents make or execute decisions.

Some 53% of organisations said agents can act autonomously for low-risk tasks, with human review for higher-risk actions. Another 24% rely on human-in-the-loop models for most tasks, while only 13% reported fully autonomous models.

When agents exceed their defined scope, 38% of respondents said the action requires human approval and 24% said it must be logged. Only 11% said such actions would be blocked automatically.

This pattern reflects a broader focus on risk and delegated authority. Action risk was cited by 63% of respondents as a key signal for governing agent behaviour, while 53% pointed to human authorisation.
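The control pattern the survey describes can be sketched as a simple decision function. This is a hypothetical illustration only: the function name, risk labels, and policy options are assumptions, with the out-of-scope policies mapped to the survey's reported choices (human approval, logging, or automatic blocking).

```python
# Hypothetical sketch of the governance pattern reported in the survey:
# agents act autonomously on low-risk tasks, higher-risk actions get human
# review, and out-of-scope actions are approved, logged, or blocked.
# All names and values here are illustrative, not from any real product.

def decide(action_risk: str, in_scope: bool, out_of_scope_policy: str = "approve") -> str:
    """Return the enforcement outcome for a proposed agent action.

    action_risk: "low" or "high"
    in_scope: whether the action falls within the agent's defined scope
    out_of_scope_policy: "approve" (38% of respondents), "log" (24%), or "block" (11%)
    """
    if not in_scope:
        return {
            "approve": "human_approval",
            "log": "log_and_allow",
            "block": "blocked",
        }[out_of_scope_policy]
    if action_risk == "low":
        return "autonomous"   # the 53% model: autonomy for low-risk tasks
    return "human_review"     # human review for higher-risk actions


print(decide("low", True))            # autonomous
print(decide("high", True))           # human_review
print(decide("low", False, "block"))  # blocked
```

A context-aware version of this logic, of the kind 79% of respondents expect to matter over the next two years, would add signals beyond a static risk label, such as the data the action touches or the identity delegating the request.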

Respondents also indicated where governance is heading. Some 79% said context-aware controls would be important or very important over the next two years, and 66% said they already have clear guardrails in place to define agent boundaries at the outset.

Retirement debt

The survey also identified problems with handling agents after their intended use ends. Only 21% of respondents said their organisations have formal decommissioning processes in place.

The report describes a build-up of what it calls "retirement debt", in which AI agents remain active after they are no longer needed and retain permissions and credentials. That leaves organisations exposed to longer-term risks tied to access control and governance.
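A formal decommissioning process of the kind only 21% of organisations report having could start with something as simple as flagging agent identities that still hold credentials but have gone quiet. The sketch below is illustrative: the record fields and the 90-day inactivity threshold are assumptions, not from the report.

```python
# Illustrative "retirement debt" check: flag agent identities that still
# hold credentials but have been inactive past a cutoff, so they can be
# reviewed for decommissioning. Field names and the 90-day default are
# assumptions for the sake of the example.

from datetime import date, timedelta

def stale_agents(agents: list[dict], today: date, max_idle_days: int = 90) -> list[str]:
    """Return names of credentialed agents inactive for more than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [
        a["name"]
        for a in agents
        if a["has_credentials"] and a["last_active"] < cutoff
    ]

inventory = [
    {"name": "report-bot", "has_credentials": True,  "last_active": date(2025, 11, 1)},
    {"name": "triage-bot", "has_credentials": True,  "last_active": date(2026, 4, 10)},
    {"name": "old-pilot",  "has_credentials": False, "last_active": date(2025, 1, 5)},
]
print(stale_agents(inventory, today=date(2026, 4, 21)))  # ['report-bot']
```

In practice the inventory would come from an identity or secrets system rather than a hard-coded list, and flagged agents would feed an approval workflow rather than being revoked automatically.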

These findings suggest lifecycle management is lagging behind deployment. As more businesses introduce AI agents into cloud, SaaS and internal systems, older controls designed for traditional software workloads may not be keeping pace.

In practical terms, organisations appear to be shifting priorities. Asked where they are focusing security efforts, 29% cited risk management, 28% cited monitoring and 19% cited permission control.

This marks a shift from simply identifying AI agents to managing their actions and access once they are operating inside the business. It also indicates that security teams are trying to bring agent behaviour into existing governance structures rather than treating these systems as a separate category.

Hillary Baron, AVP of Research at the Cloud Security Alliance, said the challenge extends across several layers of governance. "AI agent security and governance encompass an interconnected system spanning visibility, lifecycle management, policy, and monitoring. While foundational controls are in place, gaps in consistency and end-of-life management remain. As agents gain greater autonomy, governance must evolve into a more unified, operational model that can sustain control at scale," Baron said.

According to Token Security, the findings show AI agents are moving faster than the systems used to identify and control them. The company develops security tools focused on AI agent identity and access.

"AI agents are outpacing the identity systems meant to secure and control them, and it's already showing up in unknown agents and real incidents in the enterprise," said Itamar Apelblat, Chief Executive Officer and Co-Founder of Token Security. "These agents are not just another workload. They are a new type of identity and legacy controls don't work. Securing them requires an intent-based model, where every agent is continuously scoped to its purpose, which is what makes least privilege actually work for AI."