
High AI usage at Antipodean workplaces despite lack of official permission

Mon, 13th Nov 2023

A recent ISACA Pulse Poll reveals widespread use of generative Artificial Intelligence (AI) in workplaces across Australia and New Zealand, with 63% of respondents reporting that employees actively use it. However, only 36% of organisations officially permit its use, and only 11% have a formal policy governing it.

The survey also found that 80% of responding organisations offered no or only limited training in the use of AI. A striking 97% of participants expressed concern about the potential misuse of AI by malicious actors.

The ISACA study detailed the variety of uses for AI among Antipodean employees. These included creating written content (51%), increasing productivity (37%), automating repetitive tasks (37%), enhancing decision-making processes (29%), and improving customer service (20%).

Jo Stewart-Rattray, Oceania Ambassador at ISACA, emphasised the need to manage the risks associated with AI without curbing innovation. She called the current developments a "significant" opportunity for digital trust professionals in Australia. She stressed the need for organisations to prioritise governance frameworks around AI to address ethical, privacy, and security concerns.

Ms Stewart-Rattray added: "What we need to do is put guardrails around the use of AI to ensure the security of corporate data and to ensure there are formal governance guidelines in place."

Despite only 36% of organisations in Australia and New Zealand formally permitting the use of AI, 63% of respondents said employees were using it regardless. A further 21% said no policy currently exists and there are no plans to introduce one.

While employees are quick to adopt AI, training still lags behind. A mere 4% of respondent organisations offer AI training to all staff, and 57% offer no training at all, even to teams directly affected by AI. This gap persists even though only 32% of respondents report a high level of familiarity with generative AI.

Jason Lau, Board Director and CISO of ISACA, noted that better organisational alignment with employees around AI, guided by suitable policies and training, will lead to greater understanding and reduced risk.

Risks related to misinformation and disinformation (90%), loss of intellectual property (68%), and privacy violations (64%) were cited as top concerns about AI. Respondents were also worried about adversaries using the technology: 70% believed adversaries are deploying AI as successfully as, or more successfully than, digital trust professionals.

John De Santis, Board Chair of ISACA, underlined the importance of leaders understanding the technology's risks and benefits and sharing this knowledge with their teams, given the rapid pace of AI development.

While risks remain, optimism is prevalent among the digital trust community, with 76% of respondents believing AI will have positive or neutral effects on their industry. Likewise, 79% feel it will be positive or neutral for their organisations and 85% for their roles. Respondents also see AI as a tool to enhance human productivity, with 86% endorsing this view and 66% believing it will have a largely beneficial or neutral social impact.
