IT Brief Australia - Technology news for CIOs & IT decision-makers
Flux result dd4e24eb d611 436e 8eee 5f94a368885c

LevelBlue warns of GhostOps risk from rogue AI agents

Thu, 23rd Apr 2026 (Today)

LevelBlue has warned that unauthorised AI agents are creating new cybersecurity risks inside large organisations, a trend it calls "GhostOps".

Many businesses are rapidly adopting artificial intelligence agents to automate work, often before governance processes and security oversight are in place. According to LevelBlue, that leaves security teams with limited visibility over systems that can act inside company environments rather than simply process information.

Research cited by LevelBlue suggests the issue is already widespread. Microsoft's Cyber Pulse found that 29 per cent of employees had used unsanctioned AI agents for work tasks, while 80 per cent of Fortune 500 organisations were running active AI agents.

The concern differs from traditional shadow IT, where staff use unapproved software or services outside formal channels. In this case, agents may retain prompts and files, connect to application programming interfaces and software services, and carry out chains of tasks without direct human supervision.

That changes the risk profile for employers, particularly when agents are introduced by developers, analysts, business teams, or individual employees trying to eliminate repetitive work. In some organisations, the number of agents deployed this way can reach into the tens or hundreds without architecture review or formal approval.

These deployments can create several security and governance problems, including the spread of credentials and secrets across agent integrations, as well as prompt injection attacks that can steer an agent towards unsafe actions.

Growing blind spot

The rapid growth of open-source agent ecosystems adds another layer of risk, LevelBlue said. Many frameworks let users install plug-ins or skills that connect directly to enterprise tools, increasing software supply chain exposure and making dependency risks harder to track.

As a result, security teams may struggle to establish who deployed an agent, which systems it accessed, and what actions it carried out. That can complicate investigations after an incident and leave gaps in accountability.

"Unlike traditional shadow IT, the risk is bigger this time because AI agents don't just store or share data; they can take actions inside company systems. When an AI agent interacts with tools and data, it becomes an operational actor inside the environment. If organisations cannot see that activity clearly, they lose visibility not just of information but of the actions taking place inside their systems," said Grant Hutchons, Director of Security Solution Engineering and Architecture, APAC, LevelBlue.

LevelBlue argues that trying to ban AI agents outright is unlikely to work. Businesses face pressure to improve efficiency, and employees blocked from using AI tools at work may turn instead to personal devices or unmanaged accounts.

That creates tension for corporate technology leaders, who are being asked to support experimentation and productivity gains while also maintaining control over data handling, user identities, and system activity.

Governance response

A more practical response combines governance with monitoring rather than relying on blanket bans, according to LevelBlue. Governance frameworks can set out approved deployment pathways, reference architectures, identity controls, and data protection policies, while monitoring tools can help teams identify unauthorised agents across endpoints, identities, and cloud systems.

In effect, LevelBlue is calling for AI agent oversight to be treated as a routine part of enterprise risk management. That would mean establishing clear rules for how agents are deployed, what they can access, and how their activity is recorded.

"Security teams often cannot determine who deployed the agent, what systems it accessed, or what actions it performed. This makes the investigation much harder when something does go wrong. Organisations can't simply ban AI agents because the productivity benefits are too significant, and strict restrictions often see employees experimenting instead on their personal devices or using unmanaged accounts. Instead, organisations must integrate governance models that ensure they can maintain oversight with AI adoption," Hutchons said.

The warning reflects a broader shift in how technology risk develops inside organisations. Earlier waves of shadow IT centred on decentralised software adoption. AI agents, by contrast, introduce decentralised autonomy, where software can make or execute decisions inside business processes with limited direct supervision.

For Australian organisations, the message is that the problem may already exist whether or not it has been formally recognised. Companies that have encouraged AI experimentation, or simply failed to track it closely, may now have autonomous or semi-autonomous systems operating in finance, customer support, software development, or internal operations.

"Organisations should work on the assumption that GhostOps already exist in their environment and start by measuring it. Once it is known where agents are operating and what they are doing, it becomes easier to put the right guardrails in place.

"The emergence of GhostOps signals a broader shift in how technology risk develops for many organisations. Shadow IT reflected decentralised technology adoption, whereas AI agents introduce decentralised autonomy," Hutchons said.