IT Brief Australia - Technology news for CIOs & IT decision-makers
Story image
Navigating the shadow side of AI: Assessing the risks & rewards
Fri, 1st Dec 2023

A year after the release of ChatGPT, the AI landscape has seen significant growth, with over 10,000 AI tools now utilised across numerous sectors. Harmonic Security has recently assessed these tools to understand their functionality and, importantly, the potential risks they present for organisations.

ChatGPT, first launched by OpenAI, attracted over a million users within its first week. Since then, the platform has grown exponentially, boasting 100 million weekly active users and inspiring a myriad of applications based on the Large Language Model (LLM) technology. Claims suggest these innovations could enhance worker productivity by up to 40%.

Beyond the main players, the AI landscape consists of smaller AI-powered apps, commonly referred to as 'Shadow AI'. Largely unnoticed by organisations' security teams, these apps are of concern due to their lack of robust enterprise-level security. "Security teams often don't know which apps their employees use daily, posing significant risks to third-party risk and data security programs," warned Harmonic researchers.

Shadow AI covers a broad spectrum of functionalities, ranging from code assistance to customer service and content creation. Although these tools are promising in their ability to streamline work processes and improve overall productivity, the potential risks cannot be ignored.

One such risk is data leakage. With around 40% of these AI tools involving the uploading of content, code and files for AI assistance, concerns are raised about sensitive data being exposed unintentionally.

Data leakage is a significant worry among Chief Information Security Officers (CISOs), with 85.7% listing it as a top concern in AI adoption.

Privacy and data retention policies further complicate the matter. Policies differ among apps, and changes over time add to the complexity, leaving organisations vulnerable to risk. "If we understand the apps that are out there and being used, we can better understand some of the likely security issues and create policies and processes to reduce risk," Harmonic recommends.

The study also raises concern about other potential threats, including 'prompt injection' attacks - a method to manipulate LLMs into revealing sensitive information, and account takeover, both of which highlight how, without proper safeguards, AI can become a security liability.

In response to increasing AI adoption, 16 nations have pledged to create AI systems that are 'secure by design'. While this initiative signals a step in the right direction, security leaders within businesses must also be proactive to mitigate the risks associated with AI.

Harmonic suggests several practical steps for organisations, ranging from gaining a clear understanding of the Shadow AI landscape to refining and enforcing AI usage policies. Additionally, they recommend proactively keeping up to date with technological advancements and sharing best practices to navigate the rapidly evolving AI landscape effectively.

Despite the potential challenges, AI's rapid advancement since the introduction of ChatGPT offers immense potential. However, this growth demands the adoption of innovative security tools and processes to meet new and unanticipated challenges head-on.