
AI in workplace tech: Challenges of client confidentiality
Artificial intelligence (AI) embedded in workplace technology such as Microsoft Word, Outlook, and Teams can expose businesses to legal risks around client confidentiality.
AI features have become a standard part of office productivity software, designed to boost efficiency by analysing emails, messages, and video calls, and in some cases capturing facial expressions and metadata. While these capabilities improve the user experience, they also raise significant privacy concerns. Companies, sometimes unknowingly, risk breaching confidentiality obligations when AI tools handle sensitive data improperly.
Ryan Solomons, a Dispute Resolution Partner at RedeMont who specialises in trade practices and intellectual property law, highlighted the potential pitfalls businesses may encounter.
"The key question is whether liability falls on the software provider or the business using these tools. In most cases, businesses bear the greater risk. Large tech companies often include indemnity clauses protecting themselves, exposing firms to potential privacy law violations and lawsuits.
"Courts will assess whether businesses took reasonable steps to protect confidential data and what was agreed regarding use; simply relying on a software provider's assurances without understanding how the technology works is insufficient," said Solomons.
The integration of AI also complicates the safeguarding of sensitive information under non-disclosure agreements (NDAs). AI tools or smart features might inadvertently transmit confidential information to third-party servers, putting businesses at risk of breaching NDAs. Industries such as healthcare, law, and finance, which heavily rely on confidentiality, may face particular challenges.
"For lawyers, client confidentiality is non-negotiable. Firms must conduct thorough due diligence to ensure compliance with professional obligations," Solomons added, emphasizing the importance of careful scrutiny in the legal field when utilising AI technologies.
Solomons suggests several strategies to mitigate the confidentiality risks associated with AI tools. These include monitoring software updates, designating a compliance officer, reviewing terms and conditions, avoiding public AI tools, and training employees on data security and AI-related risks. Such measures can help businesses maintain legal compliance and safeguard sensitive information.
Though Australia has yet to see significant legal cases concerning AI-induced confidentiality breaches, Solomons notes that regulatory scrutiny is already underway. "It's only a matter of time before legal precedents are set. Businesses should act now rather than wait for enforcement actions against them or others to dictate their compliance measures," he advises.
As AI becomes an integral part of business operations, companies need to proactively assess their use of AI-powered tools. With appropriate safeguards in place, businesses can take advantage of AI while protecting client confidentiality and maintaining trust and legal compliance. RedeMont's Dispute Resolution team, led by Solomons, offers legal guidance to help businesses navigate these challenges in a continuously evolving digital landscape.