
The rise of agentic AI and what it means for ANZ enterprise


As much as we might not want to admit it, a lot of our working time is spent on tedious, repetitive tasks that drain our energy and keep us from the truly strategic work. Agentic AI is changing that equation, and research shows the technology is rapidly taking off in Australia and New Zealand. According to a study from YouGov and Salesforce, 69% of ANZ C-suite executives who prioritise AI are focused on implementing agentic AI over the next 12 months, and 38% say they are already implementing the technology.

Agentic AI is widely seen as the new frontier of AI innovation because these agents can automate tedious or repetitive processes without direct prompting from a human user, opening up a wide array of possible applications. An AI agent could, for example, provide expert-level advice to customers, perform administrative work for finance or HR departments, or execute complex data analysis. To adopt AI agents securely and efficiently, however, organisations across ANZ and beyond will have to do more to secure and optimise the data that powers agentic tools. Without strong data security and governance, agents won't work effectively or securely, which harms productivity and creates unnecessary risk.

What is agentic AI? Setting the record straight

What is an AI agent? Microsoft defines it as an "[application] that automates and executes business processes, acting as [a] digital colleague to assist or even perform tasks on behalf of users or teams." Salesforce, meanwhile, calls it a "type of artificial intelligence (AI) that can operate independently, making decisions and performing tasks without human intervention," and IBM calls it "an artificial intelligence system that can accomplish a specific goal with limited supervision."

While these definitions aren't perfectly identical (and there's been plenty of healthy debate in the industry!), the core concept is consistent: an AI agent is an AI system that can act intelligently and autonomously, without direct, continuous prompting from a human. It's this autonomy and advanced reasoning power that truly set agents apart from AI assistants like ChatGPT, Google Gemini, or Microsoft 365 Copilot.

Think of it this way: an assistant helps you write, while an agent writes the report for you, and that autonomy opens up a world of possibilities. Just this week, for example, I asked an AI agent to put together a report comparing software product features against an international standard and then suggest additional functionality. That saved me about three days of research, and I could spend that valuable time analysing the results.
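To make the distinction concrete, here is a minimal sketch of the agent pattern in Python. Everything in it is illustrative: the toy_planner function stands in for the model call a real agent framework would make, and the tools are stubs. The point is the loop itself, in which the agent, not the user, chooses the next action and decides when the goal is met.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Action:
        tool: Optional[str]  # None means the agent has decided it is finished
        arg: str

    def toy_planner(history: list[str]) -> Action:
        # Stand-in for the model call: a real agent would ask an LLM to pick
        # the next action based on the goal and the conversation so far.
        if not any(h.startswith("lookup") for h in history):
            return Action("lookup", "international standard feature checklist")
        if not any(h.startswith("compare") for h in history):
            return Action("compare", "product features vs checklist")
        return Action(None, "done")

    def run_agent(goal: str, tools: dict[str, Callable[[str], str]],
                  max_steps: int = 10) -> list[str]:
        history = [f"goal: {goal}"]
        for _ in range(max_steps):  # step budget guards against runaway loops
            action = toy_planner(history)
            if action.tool is None:
                break  # the agent, not the user, decides the goal is met
            history.append(f"{action.tool} -> {tools[action.tool](action.arg)}")
        return history

    tools = {
        "lookup": lambda q: f"found 3 sources for '{q}'",
        "compare": lambda q: f"gap analysis for '{q}': 2 features missing",
    }
    print(run_agent("compare our product against the standard", tools))

An assistant would stop after answering a single prompt; the loop above is what lets an agent carry a multi-step task, like that report comparison, through to completion on its own.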

Why stronger data governance makes better, safer AI agents

Agentic AI has unique benefits, but it also presents unique risks. As more organisations adopt agentic AI, they're discovering that robust data governance (the policies, roles, and technology used to manage and safeguard an organisation's data assets) is essential to ensuring these systems function securely and effectively. That's why, according to a recent study from Drexel University, 71% of organisations now have a data governance program, up from 60% in 2023.

Effective governance is on the rise because it addresses critical AI-related security and productivity issues, from preventing data breaches to reducing AI-related errors. Without strong data governance measures, agents may inadvertently expose sensitive information or make flawed autonomous decisions. With them, organisations can proactively safeguard their data through comprehensive governance policies and technologies that monitor AI runtime environments. This not only enhances security but also ensures that agentic AI tools operate optimally, delivering significant value with minimal risk.

Key elements of this approach include:

• Securing data without a human in the loop: Agents rely on the data they consume and often don't have a human in the mix to check that data is consumed and dispensed correctly. This makes it crucial that data is accurately categorised to ensure relevance and mitigate risk. When a human isn't in the loop, strong data governance measures step in to ensure that AI agents cannot access or repeat sensitive data (see the sketch after this list).

• Preventing errors and breaches: By improving the quality of the data AI consumes, robust governance frameworks help agents avoid "hallucinations" (instances where AI generates incorrect information) and protect sensitive content from accidental exposure. This significantly lowers the chances of autonomous agents making harmful decisions.
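As a rough illustration of the categorise-then-gate idea from the first bullet, here is a small Python sketch. The labels, patterns, and clearance levels are all made up for the example; a real program would plug into an organisation's own classification scheme and policy engine rather than a pair of regular expressions.

    import re

    # Illustrative sensitivity patterns only, not a real classification scheme.
    PATTERNS = {
        "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def classify(record: str) -> str:
        # Categorise data before any agent sees it.
        return "restricted" if any(p.search(record) for p in PATTERNS.values()) else "general"

    def agent_may_read(record: str, clearance: str) -> bool:
        # Governance gate: the agent only consumes data at or below its clearance.
        return classify(record) == "general" or clearance == "restricted"

    print(agent_may_read("Customer TFN: 123 456 789", clearance="general"))  # False
    print(agent_may_read("Meeting moved to Tuesday", clearance="general"))   # True

The same gate can sit on the output side as well, checking what an agent repeats, not just what it reads.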

To grapple with these and other AI-related challenges, Gartner now recommends that organisations apply its AI TRiSM (trust, risk, and security management) framework to their data environments. Data and information governance are a key part of this framework, along with AI governance and AI runtime inspection and enforcement technology. The very existence of this framework underscores the immense potential, and the equally immense risks, of agentic AI.
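AI TRiSM itself is a management framework rather than a piece of software, but the runtime inspection and enforcement element can be pictured as a hook that sits between the agent and the user. The Python sketch below is deliberately simple, with a single toy redaction rule, purely to show where such an enforcement point lives.

    import re

    # Toy rule: treat any bare 16-digit number as card-like and redact it.
    SENSITIVE = re.compile(r"\b\d{16}\b")

    def inspect_output(agent_reply: str) -> str:
        # Runtime enforcement point: scan what the agent is about to send
        # and redact anything that matches a sensitive pattern.
        return SENSITIVE.sub("[REDACTED]", agent_reply)

    print(inspect_output("Your card 4111111111111111 is now active."))
    # -> Your card [REDACTED] is now active.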

Securing the future with AI

The future of work is here, and it's powered by agentic AI. While the wave of adoption is clearly building across ANZ, organisations must prioritise robust data security and governance. This isn't just about managing risk; it's about optimising the data that fuels these powerful tools so they work effectively and securely. Organisations cannot afford to be left behind, which means doing more now to manage the risks and make this powerful tooling effective.
