IT Brief Australia - Technology news for CIOs & IT decision-makers

Australia's AI boom outpaces data skills & governance

Wed, 28th Jan 2026

Informatica has published a global study that points to a widening gap between Australian organisations' rapid adoption of generative AI and the data skills, governance and infrastructure required for responsible deployment at scale.

The research surveyed 600 data leaders across the US, the UK, Europe and Asia-Pacific, including Australia. It found that 62% of Australian organisations have already adopted generative AI in business practices.

The findings also describe what Informatica calls foundational weaknesses. Respondents reported issues spanning data reliability, workforce training, governance approaches and modernisation priorities for data security and infrastructure.

Data reliability

The study found that concerns about data reliability remain a major barrier for Australian organisations moving generative AI initiatives into production. Ninety-two percent of Australian respondents said data reliability was a barrier to moving more generative AI initiatives from pilot to production.

The same theme appeared in responses about AI agents. Forty-seven percent of Australian respondents cited data quality and reliability as a key challenge in deploying AI agents into production.

The results also highlighted a mismatch between perceived trust and reported issues. Seventy-five percent of Australian data leaders said most or nearly all employees trust the data used for AI, despite widespread concerns about reliability as a blocker for production rollouts.

Skills gaps

Australian respondents also pointed to workforce training as an immediate issue. Seventy-seven percent said they were concerned about the need for better data literacy training.

AI literacy also emerged as a priority. Seventy-five percent of respondents said employees require more AI literacy training to use the technology responsibly in day-to-day operations.

Governance approaches

The study indicated that Australian organisations take varied approaches to AI governance. It found a split between extending existing frameworks and adopting new tooling.

Fifty-one percent of Australian organisations said they extend existing data-governance tools to cover AI. Thirty percent said they invest in discrete AI governance tools. Nineteen percent said they started governance efforts from scratch.

The mix of approaches points to uneven readiness as adoption increases. It also suggests differences in how organisations assess risk, accountability and oversight for AI systems.

Modernisation priorities

Infrastructure and security modernisation ranked low among near-term priorities for Australian respondents. Only 8% identified modernising data security and infrastructure as a top near-term priority.

The study positioned that result against the pace of AI adoption. It suggested that foundational systems may not keep pace with increased use of AI tools and deployments into production settings.

Investment intentions

Despite the challenges described in the research, nearly all Australian data leaders plan to raise spending on data management: 98% said they intend to increase investment in 2026.

Respondents cited several drivers for this spending. These included improving data literacy and AI fluency, strengthening privacy and security measures, enhancing data and AI governance, and meeting changing regulatory requirements.

One enterprise leader said the findings align with risks seen during implementation. "This report highlights the significant risks of accelerating AI adoption without strong data governance and literacy. At RS Group, we address this challenge by embedding governance and accountability into how we evaluate and scale AI initiatives," said Amanda Fitzsimmons, Senior Director of Customer Data, RS Group.

Fitzsimmons described an internal assessment process that considers multiple factors. "For all AI initiatives, we thoroughly evaluate the technological, security, legal, and strategic implications to maximise opportunities while minimising risks. This approach helps ensure innovation moves forward responsibly, with risks understood and value clearly defined from the outset," said Fitzsimmons.

She also linked progress to investment and external collaboration. "Through investments in robust data-driven solutions, comprehensive upskilling, and close collaboration with partners like Informatica, we believe we are taking the essential steps to foster trusted, responsible AI that delivers real, measurable value to our customers and employees," said Fitzsimmons.

Trust gap

Informatica's local leadership framed the results as a mismatch between confidence in AI and the underlying data environment.

"Australia has set clear ambitions for how it wants to use AI to drive growth, productivity and competitiveness, but our latest study points to a clear trust paradox," said Alex Newman, Country Manager, Australia and New Zealand, Informatica. "As organisations rely more heavily on AI, trust in its outputs is rising faster than the data foundations, governance and skills needed to support that reliance."

Newman also referenced government direction on AI use and safety. "With the National AI Plan underscoring the importance of capturing AI's benefits while keeping people and organisations safe, closing that trust gap is critical. Reliable data, strong governance and a workforce that understands how to use AI responsibly will ultimately determine whether AI delivers long-term value or introduces new risk," said Newman.