Forrester launches AEGIS to help CISOs secure agentic AI systems

Fri, 22nd Aug 2025

Forrester has introduced the Agentic AI Enterprise Guardrails for Information Security (AEGIS), a six-domain framework that aims to help Chief Information Security Officers (CISOs) secure, govern, and operationalise autonomous AI agents across enterprises.

As machine-driven systems become central to operations, Forrester's research highlights that legacy security architectures are inadequate to counteract new risks brought by agentic AI, including goal hijacking, cognitive corruption, and cascading system failures. The company asserts that a shift from infrastructure-centric models to intent-centric controls is now required for effective enterprise defence.

Agentic AI challenges

The rise of agentic AI presents unique threats. These systems, capable of planning, adapting, and executing tasks at machine speed, bring both increased efficiency and considerable security challenges. Emergent behaviours could result in privilege escalation, with AI agents bypassing entitlements and acting in unpredictable ways to fulfil their objectives. Additionally, data hallucination or corruption could trigger severe failures across interconnected systems, complicating both detection and response.

"Agentic AI is more than just another emerging tech trend. It represents a fundamental shift in how enterprises operate," said Jeff Pollard, VP and principal analyst at Forrester in a recent blog. "These systems are distributed, autonomous, scalable, and designed to exhibit emergent behaviour. They don't just follow instructions; they adapt, plan, and act. As enterprises race to deploy agentic AI, CISOs must pivot from securing systems to securing intent. That's why Forrester built AEGIS."

According to Forrester's findings, the absence of causal traceability within agentic AI systems limits forensic analysis, as the systems' logic pathways can be non-linear and opaque. Analysts warn that the relentless, problem-solving nature of AI agents stands in stark contrast to human users, who generally display predictable behaviours and finite willpower.
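
To illustrate what causal traceability could look like in practice, the sketch below is a Python illustration only - it is not part of AEGIS, and every field name is an assumption. The idea is that each agent decision is logged with a pointer to the step that triggered it, giving investigators a chain to reconstruct even when the agent's reasoning branched or looped:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentTraceRecord:
    """One step in an agent's decision chain, captured for later forensics."""
    agent_id: str
    step_id: int
    goal: str                   # objective the agent was pursuing at this step
    triggered_by: int | None    # step_id of the step that caused this one
    action: str                 # what the agent actually did
    inputs_digest: str          # hash of the data the decision was based on
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Walking the triggered_by pointers backwards reconstructs the causal chain.
# Without that linkage, a forensic analyst sees isolated actions rather than
# the path from goal to outcome.
```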

Existing security architectures

Traditional cybersecurity methods, built around human behaviour and conventional infrastructure, have not evolved to address the autonomy and adaptability of AI agents. Forrester's report notes that most current detection and prevention mechanisms are not tailored for agentic systems, and few controls exist that actively manage the risks posed by these agents in real time.

The research highlights the need for organisations to move away from binary 'block or allow' security models, and instead focus on mechanisms that secure the underlying intent and actions of AI agents - a paradigm shift in enterprise security management.
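
The difference between the two models can be sketched in code. The example below is illustrative only: it assumes a hypothetical intent classifier that scores how well an action matches an agent's declared goal, and none of the names or thresholds come from Forrester's framework. The point is that an intent-centric control can return graduated verdicts rather than a binary block or allow:

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"          # action matches declared intent and entitlements
    CONSTRAIN = "constrain"  # permit, but with reduced scope or rate limits
    ESCALATE = "escalate"    # pause the agent and route to a human reviewer
    DENY = "deny"            # action falls outside the agent's entitlements


@dataclass
class AgentAction:
    agent_id: str
    declared_intent: str      # goal the agent claims to be pursuing
    operation: str            # concrete operation being attempted
    intent_confidence: float  # 0.0-1.0 score from a hypothetical intent classifier


def evaluate(action: AgentAction, allowed_ops: set[str]) -> Verdict:
    """Graduated, intent-aware decision instead of binary block/allow."""
    if action.operation not in allowed_ops:
        return Verdict.DENY
    if action.intent_confidence >= 0.9:
        return Verdict.ALLOW
    if action.intent_confidence >= 0.6:
        return Verdict.CONSTRAIN   # allowed, but under tightened constraints
    return Verdict.ESCALATE        # intent unclear: a human decides


# Example: a plausible but uncertain action is constrained, not blocked outright.
action = AgentAction("invoice-agent-7", "triage supplier invoices",
                     "read_invoice", intent_confidence=0.72)
print(evaluate(action, allowed_ops={"read_invoice", "flag_anomaly"}))  # Verdict.CONSTRAIN
```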

Emphasis on governance

Forrester advises a strong focus on governance as the starting point for any organisation working with autonomous AI. This involves bringing together security, legal, privacy, compliance, IT, and business teams to establish cross-functional governance structures that oversee the design, deployment, and ongoing risk management of agentic AI systems.

Recommendations include continuous monitoring of controls, maintaining detailed inventories of AI systems - including risk classifications - and developing clear AI acceptable use policies. The aim is to manage risks dynamically, rather than relying on static compliance regimes.
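
As a rough illustration of what such an inventory entry might record - the schema below is an assumption for this article, not a Forrester template - each agent can be tied to an accountable owner, a risk classification, and the date its controls were last reviewed:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AgentInventoryRecord:
    """One entry in a hypothetical inventory of agentic AI systems."""
    system_name: str
    owner_team: str                  # accountable business or IT owner
    risk_classification: str         # e.g. "low", "medium", "high", "critical"
    data_domains: list[str] = field(default_factory=list)   # data the agent touches
    allowed_actions: list[str] = field(default_factory=list)
    last_control_review: date | None = None   # feeds continuous monitoring


inventory = [
    AgentInventoryRecord(
        system_name="invoice-triage-agent",
        owner_team="finance-ops",
        risk_classification="high",
        data_domains=["accounts_payable"],
        allowed_actions=["read_invoice", "flag_anomaly"],
        last_control_review=date(2025, 8, 1),
    ),
]

# Dynamic rather than static compliance: surface high-risk agents whose
# controls have not been reviewed within the last 90 days.
stale = [r for r in inventory
         if r.risk_classification in {"high", "critical"}
         and (r.last_control_review is None
              or (date.today() - r.last_control_review).days > 90)]
```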

Rethinking Zero Trust

The report also underscores the need to adapt Zero Trust principles for agentic AI environments, moving towards 'least agency' rather than simply 'least privilege'. In practice, this means applying contextual and continuous authentication, setting granular access controls, and implementing methods to monitor and validate agent behaviours and intents on an ongoing basis.
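
One way to picture 'least agency' in code - an illustrative sketch, where the class and its checks are assumptions rather than a published AEGIS control - is a narrow, expiring capability grant that is re-validated on every action, instead of a standing privilege checked once at sign-on:

```python
import time


class LeastAgencyGrant:
    """Illustrative 'least agency' guard: a task-scoped, expiring set of
    capabilities, re-checked continuously rather than once at sign-on."""

    def __init__(self, agent_id: str, capabilities: set[str], ttl_seconds: int):
        self.agent_id = agent_id
        self.capabilities = capabilities             # granular, per-task grants
        self.expires_at = time.time() + ttl_seconds  # forces re-authentication

    def authorise(self, capability: str, behaviour_ok: bool) -> bool:
        # Contextual, continuous authentication: every action is re-validated
        # against the grant, its expiry, and a behavioural monitoring signal.
        if time.time() > self.expires_at:
            return False   # grant expired; the agent must re-authenticate
        if not behaviour_ok:
            return False   # behavioural monitoring flagged the agent's conduct
        return capability in self.capabilities


grant = LeastAgencyGrant("invoice-agent-7", {"read_invoice"}, ttl_seconds=300)
grant.authorise("read_invoice", behaviour_ok=True)    # True: within the grant
grant.authorise("delete_invoice", behaviour_ok=True)  # False: beyond least agency
```

The contrast with plain 'least privilege' is that the grant is scoped to a task and a time window, not merely to the agent's identity.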

Such an approach requires capabilities beyond those currently found in most organisations, particularly in the context of continuous behavioural monitoring and technical validation tailored for non-human entities.

Phased implementation

The AEGIS framework recommends a multi-domain, step-by-step approach. Initial efforts should focus on establishing governance and risk management processes within the first six months. Over the subsequent 12 to 18 months, attention should shift toward building technical controls - such as identity management, data security, and threat management - culminating in Zero Trust principles optimised for agentic AI.

This phased roadmap is designed to help organisations gradually build capability and flexibility, while maintaining vigilance against emerging risks. The framework specifies six critical areas of focus: governance, identity management, data security, application security, threat management, and tailored Zero Trust strategies.

As enterprises accelerate deployments of AI agents, industry experts highlight the urgency of transitioning from infrastructure-centric protections towards intent-centric safeguards, reflecting the new realities introduced by agentic AI.
