IT Brief Australia - Technology news for CIOs & IT decision-makers
Why workplace AI is creating a quiet legal risk most businesses haven't caught up with yet

Fri, 8th May 2026
Lauren McKee, Practice Leader, LegalVision

It doesn't take much convincing to get businesses excited by AI. The ability to increase efficiency, decrease administrative burdens, and boost productivity across all sorts of tasks is simply too enticing. But while companies continue to embrace AI technology, many have neglected to implement appropriate policies that could reduce the risks that come with its adoption.

Employees are already applying artificial intelligence in workplaces across Australia for tasks such as writing emails, summarising documents, analysing data, and making decisions. However, most of this use occurs without clear policies, guidelines, or training in place.

This gap between rapid AI adoption and weak governance creates serious exposure for businesses.

According to recent research, 68% of workers in Australia use AI at work, while 72% express concerns about data-related risks. However, fewer than one in three respondents report receiving any form of formal training in this area.

Importantly, the problem usually does not stem from employee misconduct or malicious activity.

In many situations, employees copy and paste customer contacts, internal reports, client contracts, and other company documents into AI tools to improve wording, summarise key points, or generate answers to clients' questions. Although there is no malice in such practices, intent is not what creates liability.

Indeed, the biggest risk stems from employees inadvertently disclosing confidential or private information.

When this information is entered into public AI software, it may be saved by the provider, sent offshore, or used to train AI models. This risks breaching confidentiality obligations and the Privacy Act 1988.

Most employees simply fail to understand the implications of submitting data to AI software. Alongside potential violations of privacy legislation, businesses face increased risks from inaccuracies or unverified outputs generated by AI solutions.

Some AI outputs are wrong or misleading, yet appear credible enough to be used in client work or to inform decision-making. In these scenarios, the consequences extend well beyond lost efficiency; businesses are exposed to legal and commercial risk.

There are also intellectual property issues to consider, which centre on AI-generated works and the potential influence of confidential material on AI outputs.

However, enforcing AI policies may be tricky when no such policies are currently in place.

Without a clear statement of expected practices, a business will struggle to hold its employees accountable for improper use of AI, to enforce its rules, or to establish grounds for disciplinary action.

At LegalVision, we're already witnessing more internal conflicts among our clients regarding AI.

Even though there are few official claims against employers yet, AI-related problems and disputes in workplaces are becoming increasingly common. These include employees uploading sensitive information into AI tools or relying on incorrect AI-generated outputs during their client work, leading to performance management decisions or terminations.

Similar patterns can be observed in the adoption of other technological solutions.

The early stage of AI implementation is usually marked by internal issues that gradually escalate into official disputes and employee claims.

Problems become worse when employees face disciplinary action for breaching an AI policy that was never communicated or consistently enforced.

Employees may dispute these decisions by stating they were unaware of the relevant rules, exposing employers to unfair dismissal claims. Inconsistent enforcement of AI policies across different teams increases the risk even further.

Failure to govern AI effectively carries additional risks.

Data incidents may trigger privacy breach notifications, which can attract regulators' attention and result in costly investigations. Breaches of confidentiality may give rise to contract claims. Relying on faulty AI outputs may result in financial loss. And finally, a business may face reputational risks from the improper use of AI.

But restricting AI is not a solution.

Businesses that manage AI well are not the ones that avoid this technology, but those that set clear parameters for its use.

An effective AI policy:

  • includes a list of approved AI tools;
  • prohibits the input of confidential and personal data into external tools;
  • requires verification of AI outputs before use; and
  • addresses other relevant aspects, including data security, intellectual property ownership, recordkeeping, and alignment with existing privacy and IT policies.