IT Brief Australia - Technology news for CIOs & IT decision-makers

Most firms overestimate AI governance as privacy risks surge


Kiteworks has released its AI Data Security and Compliance Risk Survey, highlighting gaps between AI adoption and governance maturity in the Asia-Pacific (APAC) region and globally.

The survey, based on responses from 461 cybersecurity, IT, risk management, and compliance professionals, reveals that only 17% of organisations have implemented technical controls that block access to public AI tools alongside data loss prevention (DLP) scanning. At the same time, 26% of respondents globally state that over 30% of the data employees input into public AI tools is private; among APAC respondents, the figure is 27%.

These findings appear against a backdrop of rising incidents; Stanford's 2025 AI Index Report recorded a 56.4% year-on-year increase in AI privacy incidents, totalling 233 last year. According to the Kiteworks survey, only 40% of organisations restrict AI tool usage via training and audits, 20% rely solely on warnings without monitoring, and 13% lack any specific policies, leaving many exposed to data privacy risks.

A disconnect between adoption and controls

"Our research reveals a fundamental disconnect between AI adoption and security implementation," said Tim Freestone, Chief Strategy Officer at Kiteworks. "When only 17% have technical blocking controls with DLP scanning, we're witnessing systemic governance failure. The fact that Google reports 44% of zero-day attacks target data exchange systems undermines the very systems organisations rely on for protection."

The survey indicates a persistent overconfidence among organisations regarding their AI governance maturity. While 40% of respondents say they have fully implemented an AI governance framework, Gartner's data shows only 12% of organisations possess dedicated AI governance structures, with 55% lacking any frameworks.

Deloitte's research further highlights this gap, showing just 9% achieve 'Ready' level governance maturity despite 23% considering themselves 'highly prepared'. This discrepancy is compounded by industry data indicating that 86% lack visibility into AI data flows.

EY's recent study suggests that technology companies continue to deploy AI at a rapid pace, with 48% already using AI agents and 92% planning increased investment, a 10% rise since March 2024. Combined with 'tremendous pressure' to justify returns, this heightens the incentive to adopt AI quickly at the expense of security.

"The gap between self-reported capabilities and measured maturity represents a dangerous form of organisational blindness," explained Freestone. "When organisations claiming governance discover their tracking reveals significantly more risks than anticipated according to Deloitte, and when 91% have only basic or in-progress AI governance capabilities, this overconfidence multiplies risk exposure precisely when threats are escalating."

Legal sector and policy awareness

According to survey data, the legal sector exhibits heightened concern about data leakage, with 31% of legal professionals identifying it as a top risk. However, implementation lags are evident, with 15% lacking policies or controls for public AI use and 19% relying on unmonitored warnings. Only 23% of organisations overall have comprehensive privacy controls and regular audits before deploying AI systems.

Within legal firms, 15% have no formal privacy controls but prioritise rapid AI uptake – an improvement over the 23% average across sectors, but still significant in a sector where risk mitigation is fundamental. Thomson Reuters figures support this, reporting that just 41% of law firms have AI-related policies, despite 95% foreseeing AI as central within five years.

Security controls and data exposure in APAC

APAC organisations closely mirror global patterns, with 40% relying on employee training and audits, 17% utilising technical controls with DLP scanning, and 20% issuing warnings with no enforcement. Meanwhile, 11% provide only guidelines, and 12% have no policy in place. This means that 83% lack automated controls, despite the APAC region's position at the forefront of the global AI market.

The exposure of private data follows global trends: 27% report that more than 30% of AI-ingested data is private, 24% report a 6–15% exposure rate, and 15% are unaware of their exposure levels. This slightly better visibility than the global figure may reflect the region's technical expertise.

For AI governance, 40% of APAC respondents claim thorough implementation and 41% report partial implementation, while 3% are still planning to implement controls and 9% have no plans at all.

Regulatory complexity and cross-border risks

APAC organisations must navigate a complex landscape of national regulations, including China's Personal Information Protection Law, Singapore's PDPA, Japan's APPI, Australia's Privacy Act reforms, India's Digital Personal Data Protection Act, and South Korea's PIPA. The survey highlights a 60% visibility gap in AI data flows in the region, which is particularly challenging given this regulatory diversity: without visibility, organisations cannot demonstrate compliance with data localisation rules, cross-border data transfer restrictions, and consent requirements.

Weak controls in APAC expose organisations to difficulties in monitoring compliance with China's data localisation regulations, managing Singapore-Australia digital agreements, and knowing how AI tools route data through restricted jurisdictions.

Organisational strategies and gaps

Regarding privacy investment, 34% of organisations employ balanced approaches that involve data minimisation and the selective use of privacy-enhancing technologies. Some 23% have comprehensive controls and audits, while 10% maintain basic policies but focus on AI innovation, and another 10% address privacy only when required by law. Meanwhile, 23% have no formal privacy controls while prioritising rapid AI adoption.

Kiteworks recommends that businesses recognise the overestimation of their governance maturity, deploy automated and verifiable controls for compliance, and prepare for increasing regulatory scrutiny by quantifying and addressing any exposure gaps.

"The data reveals organisations significantly overestimate their AI governance maturity," concluded Freestone. "With incidents surging, zero-day attacks targeting the security infrastructure itself, and the vast majority lacking real visibility or control, the window for implementing meaningful protections is rapidly closing."