IT Brief Australia - Technology news for CIOs & IT decision-makers
Craig Nielsen, VP APJ, GitLab

AI's next phase in Australia: 4 predictions for 2026

Mon, 8th Dec 2025

AI has entered a new phase in Australia, transitioning beyond experimentation and early adoption into full-scale implementation and accountability. This shift is happening fast: 84% of executives in Australia are now willing to allocate more than half of their annual IT budget to innovation, and 88% have adopted frameworks that tie development directly to business outcomes (source).

As technology leaders integrate agentic AI into software development, cloud operations, and cybersecurity, they are recognising the urgent need for greater visibility, governance, and oversight. Increasing expenses and fragmented implementations are driving executives to reconsider their approach to managing, measuring, and protecting AI.

With this in mind, Craig Nielsen, VP APJ at GitLab, shares his insights about the AI trends we can expect in 2026, so leaders can plan for success.

Holistic AI agent visibility will become a business imperative

Over the next year, organisations will realise they need visibility into the agents running across their entire network, as teams spin up AI systems from development tools, cloud platforms, and countless other sources without centralised oversight. Agent platforms that can discover and catalogue these distributed AI systems will emerge as the clear winners in the enterprise market.

This shift will be driven by a practical business imperative: as agents increase system usage and computing costs, organisations will demand clear tracking and quantification of the ROI on their AI investments. Companies will stop treating agent deployments as an untracked experiment and start requiring the same financial accountability they expect from any other enterprise technology. The most successful organisations will implement agent discovery platforms that provide visibility into what agents are running, what resources they're consuming, and whether they're delivering measurable business value.
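To make the idea concrete, here is a minimal sketch of what such a catalogue might track. All names here (`Agent`, `AgentCatalog`, the cost and value fields) are illustrative assumptions, not a real product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    source: str                   # e.g. "dev tools", "cloud platform"
    monthly_cost: float           # compute/API spend attributed to this agent
    tasks_completed: int = 0
    value_delivered: float = 0.0  # estimated business value, same currency

    def roi(self) -> float:
        """Value returned per unit of cost (0 if nothing spent)."""
        return self.value_delivered / self.monthly_cost if self.monthly_cost else 0.0

@dataclass
class AgentCatalog:
    agents: list[Agent] = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def total_spend(self) -> float:
        return sum(a.monthly_cost for a in self.agents)

    def underperformers(self, min_roi: float = 1.0) -> list[Agent]:
        """Agents whose tracked value does not cover their cost."""
        return [a for a in self.agents if a.roi() < min_roi]

# Illustrative entries only
catalog = AgentCatalog()
catalog.register(Agent("code-review-bot", "dev tools", monthly_cost=400.0,
                       tasks_completed=120, value_delivered=1200.0))
catalog.register(Agent("log-summariser", "cloud platform", monthly_cost=900.0,
                       tasks_completed=30, value_delivered=300.0))
```

Even a registry this simple answers the three questions the paragraph raises: which agents are running, what they consume, and whether they pay for themselves.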

Human-centric identity systems will fail in an agent-to-agent world

Organisations will also confront an access and permissions crisis as agent-to-agent interactions expose the limitations of traditional access-control systems. Unlike human users or simple automations, agentic AI systems communicate with each other, delegate tasks, and make decisions that cascade across multiple systems. Existing permission frameworks break down when one agent tells another what to do, because they were designed for individual human actors, not autonomous systems acting on behalf of other autonomous systems.

Leaders will demand unprecedented visibility into where their data flows and how systems use it as agents proliferate across their technology stack. While some companies are implementing short-term fixes such as assigning personas to agents, these approaches treat agents like employees rather than addressing the fundamental governance challenge. Organisations that continue to use human-centric permission models will find themselves unable to trace decision-making chains, audit agent actions, or maintain security as their AI systems become increasingly interconnected and autonomous.

Organisations must accept that this will require rethinking identity and access management from the ground up. They should assemble cross-functional teams to design governance frameworks built for autonomous systems, rather than retrofitting human-centric models. The window to get ahead of this is narrow, as once agent ecosystems become deeply interconnected, redesigning foundational frameworks becomes exponentially harder.
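One concrete piece of such a framework is recording who delegated what to whom, so any agent action can be traced back to the human principal that originally authorised it. The sketch below assumes a simple append-only audit trail; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DelegationRecord:
    actor: str        # agent performing the action
    delegator: str    # agent or human that asked it to
    action: str
    timestamp: str

class AuditTrail:
    def __init__(self) -> None:
        self._records: list[DelegationRecord] = []

    def log(self, actor: str, delegator: str, action: str) -> None:
        self._records.append(DelegationRecord(
            actor, delegator, action,
            datetime.now(timezone.utc).isoformat()))

    def chain_for(self, actor: str) -> list[str]:
        """Walk delegations back to the original (human) principal."""
        chain = [actor]
        current = actor
        for record in reversed(self._records):
            if record.actor == current:
                chain.append(record.delegator)
                current = record.delegator
        return chain

# Illustrative: a human asks a planner agent, which delegates to a deploy agent
trail = AuditTrail()
trail.log("planner-agent", "alice@example.com", "plan release")
trail.log("deploy-agent", "planner-agent", "deploy service")
```

The point is not the data structure but the property it guarantees: every action carries a delegation chain ending at an accountable human, which is exactly what human-centric permission models lose once agents start instructing each other.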

Security teams will prioritise implementing clearly defined, high-impact AI use cases

AI has proven its value to security teams by reducing false-positive rates and streamlining security operations (source). Successful implementations often start with clearly defined, high-impact use cases: log analysis at volumes that would overwhelm human analysts, network pattern recognition for novel threats, vulnerability prioritisation based on actual exploitability, and automated incident triage to reduce alert fatigue. These are a handful of areas where security teams have found success, but each organisation will need to identify where toil creates the most friction and then pursue improvements holistically.
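One of the use cases above, vulnerability prioritisation by actual exploitability, can be sketched as a scoring function that blends base severity with reachability signals. The weights and field names below are illustrative assumptions, not a standard:

```python
def exploitability_score(vuln: dict) -> float:
    """Blend base severity with signals that the flaw is actually reachable."""
    score = vuln["cvss"]                      # base severity, 0-10
    if vuln.get("exploit_in_wild"):           # known active exploitation
        score += 4.0
    if vuln.get("internet_facing"):           # reachable from outside
        score += 2.0
    if not vuln.get("asset_in_use", True):    # dormant asset, lower urgency
        score -= 3.0
    return score

# Illustrative: a lower-CVSS flaw that is actively exploited and exposed
# outranks a critical flaw that nothing can reach.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_in_wild": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_in_wild": True, "internet_facing": True},
]
ranked = sorted(vulns, key=exploitability_score, reverse=True)
```

The design choice matters more than the numbers: ranking by exploitability rather than raw CVSS is what keeps analysts working the queue that actually reduces risk.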

Once a team has identified the right use cases, the implementation approach will be just as important. Security teams should start by documenting institutional knowledge across the department, because AI agents need clear direction: without company-specific context, they will only deliver technical debt. This documentation will also help strengthen and standardise internal security processes.

Strategic AI-human collaboration will define competitive advantage in 2026

The winners won't be the companies that adopt AI fastest. They'll be the ones most intentional about what they assign to AI versus humans. With 90% of executives expecting agentic AI to become standard within three years (source), the real differentiator will be knowing exactly which tasks benefit from human creativity and judgment versus which should be automated.

Organisations that nail this calibration will create compounding advantages, freeing developers to focus on high-value architectural decisions and strategic thinking while AI handles code generation and routine maintenance. Companies that get the balance wrong face a double penalty: wasted human talent on automatable work, and AI making decisions that require nuanced judgment.

The developer role is fundamentally shifting from writing every line of code to becoming system architects and AI orchestrators, breaking down complex challenges and coordinating multiple agents. Teams that embrace this evolution will build faster, innovate more effectively, and attract top talent seeking cutting-edge human-AI collaboration.

The next era of AI in Australia

Companies that know which agents are working, what resources they use, and what results they deliver will turn AI from a test run into a reliable business tool. At the same time, rules and controls will improve, helping companies track, secure, and explain how these automated systems make decisions.

Understanding AI isn't just about the technology. The best companies will see AI as a partner, knowing when people need to make decisions versus when automation works better. Ultimately, those who get this balance right will move faster, create better solutions, and build workplaces based on responsibility, new ideas, and trust.