Agentic AI adoption rises in ANZ as firms boost security spend

New research from Salesforce has revealed that all surveyed IT security leaders in Australia and New Zealand (ANZ) believe that agentic artificial intelligence (AI) can help address at least one digital security concern within their organisations. According to the State of IT report, the deployment of AI agents in security operations is already underway, with 36 per cent of security teams in the region currently using agentic AI tools in daily activities—a figure projected to nearly double to 68 per cent over the next two years.

This surge in AI adoption is accompanied by rising investment, with 71 per cent of ANZ organisations planning to increase their security budgets in the coming year. Although this is slightly below the global average of 75 per cent, it signals a clear intent within the region to harness AI to strengthen cyber defences. AI agents are being relied upon for tasks ranging from faster threat detection and investigation to sophisticated auditing of AI model performance.

Alice Steinglass, Executive Vice President and General Manager of Salesforce's Platform, Integration, and Automation division, said, "Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report also highlights industry-wide optimism about AI's potential to improve security but notes hurdles in implementation. Globally, 75 per cent of surveyed leaders recognise their security practices need transformation, yet 58 per cent are concerned their organisation's data infrastructure is not yet capable of supporting AI agents to their full potential.

As both defenders and threat actors add AI to their arsenals, the risk landscape is evolving. Alongside well-known risks such as cloud security threats, malware, and phishing attacks, data poisoning has emerged as a new top concern. Data poisoning involves malicious actors corrupting AI training data sets to subvert AI model behaviour. This, together with insider threats and cloud risks, underscores the need for robust data governance and infrastructure.
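
That dynamic can be illustrated with a toy experiment. The sketch below is a minimal, self-contained example using synthetic data and scikit-learn, not anything drawn from the report: it flips the labels of a growing share of a classifier's training set, retrains, and shows how test accuracy degrades as the poisoned fraction grows.

```python
# Illustrative only: label-flipping is one simple form of data poisoning.
# An attacker who can corrupt a slice of the training set can degrade,
# or in targeted attacks redirect, the behaviour of the resulting model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean, synthetic binary-classification data standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(poison_fraction: float) -> float:
    """Flip the labels of a random fraction of the training set, retrain, evaluate."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # corrupt the training labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)             # evaluated on untouched test data

for fraction in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned {fraction:4.0%} of training labels -> "
          f"test accuracy {accuracy_after_poisoning(fraction):.2f}")
```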

Across the technology sector, the expanding use of AI agents is rapidly reshaping industry operations. Harsha Angeri, Vice President of Corporate Strategy and Head of AI Business at Subex, noted that AI agents equipped with large language models (LLMs) are already impacting fraud detection, business support systems (BSS), and operations support systems (OSS) in telecommunications. "We are seeing opportunities for fraud investigation using AI agents, with great interest from top telcos," Angeri commented, suggesting this development is altering longstanding approaches to software and systems architecture in the sector.

The potential of agentic AI extends beyond security and fraud prevention. Angeri highlighted the emergence of the "Intent-driven Network", where user intent is seamlessly translated into desired actions by AI agents. In future mobile networks, customers might simply express their intentions, such as planning a family holiday, and rely on AI-driven networks to autonomously execute tasks, from booking arrangements to prioritising network resources for complex undertakings such as drone data transfers. Angeri uses the term "Intent-Net" for this approach, which promises hyper-personalisation and real-time orchestration of digital services.
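
As a rough sketch of how such an intent pipeline might be wired, the hypothetical Python example below translates a free-text intent into a plan of discrete tasks executed by registered handlers. The planner is a deterministic stand-in for an LLM, and the task names are invented purely for illustration.

```python
# A minimal sketch of the "intent-driven" pattern Angeri describes: a user's
# high-level intent is translated into a plan of concrete tasks that agents
# then carry out. Nothing here reflects a real telco system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    params: dict

def translate_intent(intent: str) -> list[Task]:
    """Hypothetical planner: in a real system an LLM would produce this plan."""
    if "holiday" in intent.lower():
        return [
            Task("search_flights", {"flexible_dates": True}),
            Task("reserve_accommodation", {"party_size": 4}),
            Task("prioritise_bandwidth", {"service": "itinerary_sync"}),
        ]
    return []

# Registry of task executors -- each would wrap a real API in practice.
EXECUTORS: dict[str, Callable[[dict], str]] = {
    "search_flights": lambda p: f"searched flights ({p})",
    "reserve_accommodation": lambda p: f"reserved rooms ({p})",
    "prioritise_bandwidth": lambda p: f"raised QoS priority ({p})",
}

for task in translate_intent("Plan a family holiday for the school break"):
    print(EXECUTORS[task.name](task.params))
```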

The rapid penetration of AI chips in mobile devices also signals the mainstreaming of agentic AI. Angeri stated that while only about 4 to 5 per cent of smartphones had AI chips in 2023, this figure has grown to roughly 16 per cent and is expected to reach 50 per cent by 2028, indicating widespread adoption of AI-driven mobile services.

However, industry experts caution that agentic AI comes with considerable technical and operational challenges. Yuriy Yuzifovich, Chief Technology Officer for AI at GlobalLogic, described how agentic AI systems, driven by large language models, differ fundamentally from classical automated systems. "Their stochastic behaviour, computational irreducibility, and lack of separation between code and data create unique obstacles that make designing resilient AI agents uniquely challenging," he said. Unlike traditional control systems where outcomes can be rigorously modelled and predicted, AI agents require full execution to determine behaviour, often leading to unpredictable outputs.
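
The contrast can be made concrete with a toy comparison, illustrative only, in which the action names and probabilities are invented: a classical controller maps the same input to the same action every time, while an LLM-style agent samples its next action from a distribution, so identical inputs can yield different runs.

```python
# Toy illustration of the distinction Yuzifovich draws between deterministic
# control systems and stochastic, LLM-driven agents.
import random

def classical_controller(reading: float) -> str:
    # Deterministic: the outcome can be modelled and verified in advance.
    return "open_valve" if reading > 0.7 else "hold"

def llm_style_agent(reading: float, temperature: float = 1.0) -> str:
    # Stochastic: the agent samples an action, so its behaviour is only
    # known by actually running it ("full execution" in Yuzifovich's terms).
    actions = ["open_valve", "hold", "escalate_to_human"]
    weights = [w ** (1.0 / temperature) for w in (0.6, 0.3, 0.1)]
    return random.choices(actions, weights=weights)[0]

reading = 0.8
print({classical_controller(reading) for _ in range(5)})  # always a single outcome
print({llm_style_agent(reading) for _ in range(5)})       # may vary run to run
```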

Yuzifovich recommended that enterprises adopt several key strategies to address these challenges: using domain-specific languages to ensure reliable outputs, combining deterministic classical AI with generative approaches, ensuring human oversight for critical decisions, and designing with modularity and extensive observability for traceability and compliance. "By understanding the limitations and potentials of each approach, we can design agentic systems that are not only powerful but also safe, reliable, and aligned with human values," he added.
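
A minimal sketch of how these strategies might fit together is shown below. Everything in it is an illustrative assumption rather than any vendor's API: strict JSON stands in for a domain-specific language, a deterministic validator gates the agent's untrusted output, high-risk actions are held for human approval, and each step is logged for traceability.

```python
# Guardrail pattern combining the strategies above: constrained output,
# deterministic validation, human oversight, and observability via logging.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_guardrails")

# Hypothetical action catalogue with risk levels; invented for illustration.
ALLOWED_ACTIONS = {"quarantine_host": "high", "close_ticket": "low"}

def validate(raw_output: str) -> dict:
    """Deterministic gate: reject anything outside the constrained schema."""
    action = json.loads(raw_output)            # the "DSL" here is strict JSON
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action.get('name')!r} is not permitted")
    return action

def execute(raw_output: str, human_approves=lambda a: False) -> str:
    action = validate(raw_output)
    log.info("proposed action: %s", action)    # observability for audit/compliance
    if ALLOWED_ACTIONS[action["name"]] == "high" and not human_approves(action):
        log.info("blocked pending human review")
        return "escalated"
    return f"executed {action['name']}"

# The agent's (hypothetical) output is treated as untrusted until validated.
print(execute('{"name": "close_ticket", "ticket_id": 42}'))
print(execute('{"name": "quarantine_host", "host": "10.0.0.5"}'))
```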

As businesses across sectors embrace agentic AI, the coming years will test the ability of enterprises and technology vendors to balance innovation with trust, resilience, and security. With rapid advancements in AI agent deployment, the industry faces both the opportunity to transform digital operations and the imperative to manage the associated risks responsibly.
