Why agentic AI is the game-changer SOCs need
Australia's cybersecurity landscape is under a very different kind of pressure than it was even a year ago. The latest figures from the Australian Signals Directorate (ASD) show just how quickly things are shifting - more than 42,500 hotline calls, over 1,200 incidents responded to and an 83 per cent rise in malicious-activity notifications. These figures reflect attackers innovating faster than most organisations can reasonably respond.
For CISOs, the challenge is not just about the sheer number of threats but the complexity sitting behind them. Conversations with cybersecurity leaders consistently point to how quickly the nature of attacks has evolved. Intrusions that once relied on broad, opportunistic tactics are now shaped by automation and AI, allowing adversaries to craft highly targeted phishing messages, convincingly impersonate executives and adapt malware mid-campaign.
A number of recent Australian cybersecurity incidents have shown that AI-generated content is now being used to make common attack methods more effective. As a result, there is a noticeable gap emerging between the speed of these attacks and the time required for Security Operations Centre (SOC) teams to piece together the full picture.
Inside today's SOC
Walk into almost any SOC in Australia and you see a team working with a level of intensity that has slowly but steadily become the norm. Analysts begin their day absorbing queues of overnight alerts, each carrying its own set of questions: Is this noise? Is this the start of a breach? How deep does this go?
The environments they monitor are sprawling: on-premises systems mixed with cloud workloads, legacy apps intertwined with modern identity platforms. Every investigation requires stitching those layers together in ways that are rarely straightforward.
According to Splunk's State of Security 2025 Report, over half (59%) of global cybersecurity respondents surveyed say they spend more time maintaining tools than defending the business. A similar majority (59%) report being overwhelmed by alert volume, while 52% admit their teams are exhausted or at risk of burnout.
In Australia, where many SOCs are smaller and carry responsibility for increasingly complex environments, the strain is even more pronounced. Analysts are not struggling because they lack competence or commitment; they are struggling because the work has grown faster than the capacity to interpret it.
Much of this pressure stems from the sheer amount of human work that sits behind every incident. Before an analyst can form an opinion, they must review logs from multiple systems, correlate identity behaviour, trace suspicious movements, reference threat-intelligence indicators and build a coherent timeline - all while the adversary may be continuing to move. The work is important, but it is also time-consuming, and time is the one element Australia's SOCs feel they have less of every month.
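To make the manual burden concrete, the correlation step alone - pulling events from several systems and ordering them into a single timeline - looks roughly like the following. This is an illustrative Python sketch, not a real SOC tool; the log sources, field names and event values are invented for the example.

```python
from datetime import datetime

def build_timeline(*event_sources):
    """Merge events from several hypothetical log sources into one
    chronologically ordered timeline an analyst can read top to bottom."""
    merged = []
    for source_name, events in event_sources:
        for event in events:
            merged.append({
                "time": datetime.fromisoformat(event["time"]),
                "source": source_name,
                "detail": event["detail"],
            })
    return sorted(merged, key=lambda e: e["time"])

# Hypothetical fragments from three systems involved in one incident.
vpn_logs = [{"time": "2025-06-01T02:03:00+00:00", "detail": "login from unfamiliar IP"}]
identity_logs = [{"time": "2025-06-01T02:05:30+00:00", "detail": "MFA prompt approved"}]
endpoint_logs = [{"time": "2025-06-01T02:04:10+00:00", "detail": "new process spawned"}]

timeline = build_timeline(
    ("vpn", vpn_logs), ("identity", identity_logs), ("endpoint", endpoint_logs)
)
for e in timeline:
    print(e["time"].isoformat(), e["source"], "-", e["detail"])
```

Even this toy version shows why the work is slow at scale: every real incident multiplies the number of sources, formats and clock skews an analyst has to reconcile by hand.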
The role of agentic AI
Agentic AI is beginning to reshape how investigations unfold inside the SOC, not by taking over the work but by changing where human effort is spent. It can sift through logs, trace behaviours across different systems, form early hypotheses and pull together the initial picture of what has occurred.
Instead of starting from scratch, analysts begin with a structured view of what transpired and can focus their attention on validating assumptions, challenging anomalies and deciding what to do next.
In practice, it changes the rhythm of an investigation. A suspicious login at 2am no longer sits waiting for someone to arrive and make sense of it; an AI agent can analyse the surrounding activity, link related events and prepare a first-pass narrative before the morning shift begins. Analysts remain firmly in the loop, but they enter the process at the stage where judgement matters most, not where administrative legwork slows everything down.
At the same time, agentic AI is still early in its evolution. It brings speed and scale, but it doesn't yet have the organisational awareness, risk intuition or business context that experienced analysts carry. It is capable and genuinely useful, but not ready to be left entirely unsupervised.
The value increases further when AI can draw on context beyond a single organisation. Through the Cyber Threat Intelligence Sharing (CTIS) initiative, led by the Australian Signals Directorate, AI-driven investigations can be informed by patterns emerging across trusted networks nationwide. Recent integrations, including the new CTIS plug-in that connects Splunk Enterprise Security directly to the CTIS platform, now allow participating organisations to share and receive threat intelligence at machine speed. That broader visibility gives AI a stronger starting point and helps SOC teams understand earlier whether a suspicious behaviour is isolated or part of something unfolding more widely.
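At its simplest, the benefit of shared intelligence is that a local observation can be checked against indicators seen elsewhere. The sketch below illustrates that idea in Python; the indicator schema and values are invented for the example and are not the actual CTIS data format.

```python
def match_indicators(observed_events, shared_indicators):
    """Flag local events whose remote IP appears in a shared
    threat-intelligence feed (hypothetical, simplified schema)."""
    bad_ips = {i["value"] for i in shared_indicators if i["type"] == "ipv4"}
    return [e for e in observed_events if e.get("remote_ip") in bad_ips]

# Invented example data: one shared indicator, two local events.
feed = [{"type": "ipv4", "value": "203.0.113.9"}]
events = [
    {"host": "srv-01", "remote_ip": "203.0.113.9"},
    {"host": "srv-02", "remote_ip": "198.51.100.4"},
]

hits = match_indicators(events, feed)
print(hits)  # only the event touching the shared indicator is flagged
```

The point is not the lookup itself but the starting position it gives an investigation: a behaviour that matches a nationally shared indicator is treated very differently from one that appears isolated.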
The pressures facing Australia's SOCs won't ease on their own, and neither will the pace of attacker innovation. Agentic AI offers a way to expand the capacity of security teams without compromising the human insight that underpins good decision-making.
The organisations that succeed will be the ones that move early, experiment responsibly and build the guardrails that let AI and analysts work together. For CISOs navigating an environment shaped by speed and complexity, that balance may prove to be the defining advantage.