IT Brief Australia - Technology news for CIOs & IT decision-makers
Elizabeth

Training lags behind AI use in Australian workplaces

Thu, 30th Apr 2026

Plain English Foundation has released research on workplace use of generative AI across Australian organisations, finding that training on how to use the technology is lagging behind adoption.

The study surveyed 289 professionals across government, private sector and not-for-profit organisations to examine how people are using generative AI at work and the problems they face when applying it to writing tasks.

Almost 90% of respondents said they use generative AI at least once a month, while about one in three use it every day. Most also reported that their organisation already has a generative AI policy: 57.1% said one was in place and a further 16.3% said one was in development.

Use was concentrated in a small number of tools. Copilot was the most widely used at 67.8%, followed by ChatGPT at 35.6%. Fewer respondents reported using Gemini, Grammarly and Claude. Around one in 13 said their workplace had developed its own in-house AI tool.

Writing tasks

Workers are using AI mainly for writing support rather than fully autonomous document production. The most common uses were refining written expression, summarising long texts and brainstorming. About a third also said they use AI for research and for drafting emails and other documents.

That increase in use has not removed the need for manual review. Respondents identified clichéd language, hallucinations, inappropriate tone, overly long prose and summaries that missed important information as the most common problems in AI-generated writing.

Dr Elizabeth Beach, Editor, Plain English Foundation, said human review remains central to workplace writing. "As AI becomes embedded in everyday writing tasks, human-driven plain language principles are more important than ever," Beach said.

She added: "Right now, we can't rely on gen AI to produce high-quality output that's error-free and sounds human, no matter how good the prompting. To do that, we need humans."

Confidence gap

The survey also pointed to mixed attitudes about the technology. When respondents were asked how confident they felt about AI accuracy and reliability, scores ranged from 0% to 92%, with an average of 45%.

That suggests many workers are using the tools regularly while still questioning the quality of the output. Views on AI's broader impact in the workplace were somewhat more positive, with average optimism at 53.3%.

One respondent described that ambivalence: "I am both concerned and excited. I am excited because I believe it is the future and has so much potential. For me it is like [having] an always-positive assistant with incredible knowledge... But I am also concerned ... for how it might undermine the learning stages of upcoming professionals. How do we pass on the cognitive skills and mindset necessary for our work when we might just outsource the work to AI? And how do we ensure we keep the nuance and humanity in the work? And if AI generates writing based on what is already available to it, how do we manage the bias when it is then using other AI-generated work as a basis?"

Another participant was more blunt about current limitations. "AI is still far too inaccurate and [unfortunately] real-world situations are being used as a testing ground."

Training shortfall

Despite widespread use, only 21.1% of respondents said they had completed training on how to use generative AI. Two in five said they wanted formal training, while about one-third said they were content to experiment on their own.

The figures point to a gap between formal governance and practical skills. While many workplaces appear to have put policies in place, fewer workers have received structured guidance on how to use the tools in ways that improve writing quality and reduce errors.

Plain English Foundation said the findings show training now needs to extend beyond prompt-writing and software familiarity. It argued that clear communication, alongside critical and analytical thinking, is necessary if staff are to assess AI output properly and remain responsible for what they publish or circulate.

Yusuf Pingar, GM, Plain English Foundation, said clients were struggling to improve the standard of AI-generated documents. "Once people know the fundamentals of clear communication, it's much easier to know how to work with AI to improve the quality of the output," Pingar said.

He added: "Clients tell us they're struggling to get quality outputs from AI, so we show them how to improve their prompts, how to critically assess their AI outputs and how to polish the expression to ensure a top-quality final document."

Pingar said faster drafting also brings risks for organisations if review standards slip. "AI tools have sped up the writing process, and the pressure to write even faster is building every day. But that brings the risk of errors, poor quality writing and damaged reputations. Our philosophy is to slow down, take the time to get your team trained in clear communication, and start getting better outputs from AI. That's when you'll really start to see the productivity gains kick in."