
One in three Australians upload sensitive data to AI tools, report finds

Thu, 4th Sep 2025

Over a third of Australian professionals are uploading sensitive company information to artificial intelligence platforms, according to a recent report by Josys.

The Shadow AI Report 2025 reveals that 36% of employees have entered confidential data, including strategic plans (44%), technical data (40%), financials (34%), and internal communications (28%), into AI tools that may lack formal oversight. Additionally, 24% admit to sharing customer personally identifiable information (PII), while 18% upload intellectual property and legal or compliance documents.

The report, based on a survey of 500 technology decision makers across Australia, highlights a rapid increase in the use of so-called "shadow AI" - the unsanctioned adoption of AI tools by employees without company approval or adherence to established protocols. Josys warns this trend is exposing organisations to significant compliance and security risks, particularly as regulatory scrutiny heightens.

User confidence and oversight

While 78% of professionals now use AI tools in their work, the report identifies that 63% of these users are not confident they can use the technologies securely. Furthermore, 70% of organisations report having moderate to no visibility into which AI platforms are being utilised across their operations.

Smaller businesses appear especially vulnerable. Only 30% of companies with fewer than 250 staff believe they are fully equipped to assess and manage the risks associated with AI adoption, compared with 42% of larger companies. The perceived gap between organisational preparedness and AI proliferation is most pronounced in sectors with high regulatory burden, such as finance, IT/telecommunications, and healthcare.

"Shadow AI is no longer a fringe issue. It's a looming full-scale governance failure unfolding in real time across Australian workplaces," said Jun Yokote, COO and President of Josys International. "While the nation is racing to harness AI for increased productivity, without governance, that momentum quickly turns into risk. Productivity gains mean nothing if they come at the cost of trust, compliance, and control. What's needed is a unified approach to AI governance which combines visibility, policy enforcement, and automation in a single, scalable framework."

Departmental trends

The report found that sales and marketing departments present the highest risk, with 37% of employees in these teams uploading sensitive data to AI systems. Finance and IT/telecoms followed closely at 36%, with healthcare at 31%. The findings highlight the widespread adoption of AI-driven productivity tools across numerous functions, often with little oversight.

Compliance challenges

Regulatory and compliance concerns are rising, with 47% of respondents citing upcoming AI model transparency requirements and amendments to the Australian Privacy Act as major hurdles. Despite these new demands, half of organisations surveyed continue to rely on manual policy reviews for AI use, and a third have no formal governance framework in place. Where oversight exists, only 25% believe their enforcement tools are highly effective.

The survey indicates a growing disconnect between regulatory compliance obligations and an organisation's capability to meet them. As AI usage accelerates, particularly in critical sectors and among smaller organisations, many are struggling to develop or implement effective policy enforcement and oversight mechanisms. This has led some institutions into cycles of reactive, rather than proactive, management of AI-related risks.

According to Josys, there is an urgent need for coordinated and immediate action to close visibility gaps and improve oversight across all Australian organisations leveraging AI technologies. Recommendations include auditing AI use across all departments, automating risk assessments based on the sensitivity of data and business function, enforcing real-time policies that align with role-based access and organisational risk tiers, and ensuring readiness for audits with AI-specific compliance reporting.

"Shadow AI" and the rapid spread of unsanctioned technology use have emerged as widespread issues at a time when economic pressures and job demands have driven employees to seek greater productivity through AI, often at the expense of security and compliance frameworks. The report notes that, without foundational governance practices in place, businesses risk both regulatory breaches and loss of long-term trust and resilience.
