Exclusive: AI agents spark new security risks as SailPoint redefines identity
Enterprises are barrelling into the AI era, embedding autonomous agents across systems in a bid to boost efficiency, agility, and innovation. But in doing so, many are sleepwalking into a full-blown identity security crisis.
As AI agents - digital entities capable of acting on behalf of humans - proliferate across the enterprise, they're exposing critical vulnerabilities that few organisations are adequately prepared to confront.
SailPoint, a global leader in identity security, is sounding the alarm.
The company's latest research report reveals a troubling paradox: 82% of organisations already use AI agents, and 98% plan to expand their use - even as 96% of technology professionals see them as a growing security risk. These agents aren't passive tools; they're active, autonomous actors with access to sensitive data, systems, and workflows. And they're behaving unpredictably.
Nam Lam, Group Vice President for Australia and New Zealand at SailPoint, says this shift is unlike anything the identity security space has faced.
"AI in itself brings about demonstrable value to businesses. But at the same time, we also acknowledge that it's a force multiplier for threats," Lam says. "If you were to ask yourself, are human beings predictable? They are not. So AI agents are exactly the same."
That unpredictability, paired with autonomy and privilege, is a combustible mix - one that attackers are already exploiting.
The numbers are stark. According to SailPoint's research, 72% of security professionals believe AI agents are riskier than traditional machine identities.
These agents have triggered real-world incidents: 80% of organisations say AI agents have taken unintended actions, including unauthorised access to systems (39%) and the sharing of sensitive data (33%). Shockingly, nearly one in four respondents say AI agents have been tricked into revealing access credentials.
Despite this, only 44% of organisations report having governance policies in place to manage these agents.
"If organisations are not all brought into how to protect themselves around AI agents, then there's got to be blind spots... that threat actors can tap into," Lam warns. Compounding the problem, many organisations remain in denial about the scope of the issue. "The take-up of AI is just going wild," Lam observes, "but a lot of organisations are not ready for it in terms of being able to protect themselves from the potential threats it brings."
That protection gap isn't just technical - it's cultural and operational.
While IT and cybersecurity teams may be attuned to the risks, other parts of the business lag behind. This disjointed awareness creates security blind spots in an environment that is already hard to control. The challenge is made more urgent by hybrid work, cloud expansion, and a broader digital footprint, which together have become fertile ground for AI agents to operate - and misfire - at scale.
SailPoint has been leveraging AI within its own platform for years, powering the automation of identity and access management across sprawling application environments.
Now, the company has introduced generative AI into its stack, including the 'Harbor Pilot' assistant - a ChatGPT-style tool designed specifically for identity workflows.
"You can ask it anything: 'How do I create a workflow?' It will come back with a fairly humanised response," Lam explains. "All the automation that you see within our platform is underpinned by AI itself."
But while SailPoint integrates AI into its defences, bad actors are doing the same.
"If cybersecurity capabilities like ours are not using AI ourselves, then we're in the backward position," Lam says. It's a digital arms race, and AI agents are on both sides.
One of SailPoint's boldest positions is the recognition of AI agents as a distinct identity type - not just extensions of existing machine identities, but something new that needs its own rules, oversight, and safeguards. "We recognise that AI agents behave more like humans than machines," Lam says. "It's going to require a new class of governance, visibility, and control."
To meet this challenge, SailPoint is working closely with ecosystem partners like Deloitte to help organisations rapidly mature their identity governance practices. "We meet organisations wherever they are within their level of maturity," Lam says. "Many aren't even thinking about governing AI agents, so we help them build that maturity out from there."
Awareness, he adds, is growing - but execution remains a challenge.
"What reflects the momentum is that we are seeing a lot of organisations having the awareness of the risk posed by AI. But organisations are still struggling to respond effectively," Lam explains.
"The proliferation of AI agents is growing so fast… the thought process around the security and the governance around AI agents, I wouldn't say it's been an afterthought, but organisations struggle to pull together the people, processes, and technologies to govern and secure themselves effectively."
In an era where digital identity is everything, failing to govern AI agents isn't just a security lapse - it's an existential threat.