Government hesitation curbs GenAI adoption – Gartner
Generative artificial intelligence (GenAI), despite its potential, is seeing a hesitant approach from government organisations worldwide. Global research and advisory firm Gartner predicts that fewer than 25% of government organisations will offer GenAI-enabled citizen-facing services by 2027. Fear of public failure and a significant lack of community trust in government use of the technology appear to be the principal obstacles.
The global trend is mirrored in Australia, according to Dean Lacheca, VP Analyst at Gartner. "Australian federal, state and local governments have followed global patterns in the cautious enthusiasm for the potential of GenAI through 2023 and into 2024. Though many have longer-term ambitions for using the technology to improve citizen services, their immediate priorities have been to mitigate the security and privacy risks of the technology and to look for areas where it can deliver operational efficiencies," states Lacheca.
In its annual worldwide survey of more than 2,400 CIOs and technology executives, Gartner found that 25% of governments have deployed, or plan to deploy, GenAI within the next 12 months. A further 25% intend to deploy within the next 24 months. The initial focus appears to centre on establishing a governance framework to support experimentation and narrow adoption.
Although governments have successfully implemented mature AI technologies for years, they are contending with heightened risk and uncertainty around adopting GenAI at scale. Lacheca elaborates, "Risk and uncertainty are slowing GenAI's adoption at scale, especially the lack of traditional controls to mitigate drift and hallucinations. In addition, a lack of empathy in service delivery and a failure to meet community expectations will undermine public acceptance of GenAI's use in citizen-facing services."
To mitigate these risks, Gartner recommends that governments align the pace of GenAI adoption with their tolerance for risk, so that early missteps do not erode public approval. In practice, this means back-office use cases will progress faster than those that directly serve citizens.
Lacheca suggests governments can fast-track GenAI adoption by focusing on use cases that primarily affect internal operations, avoiding the perceived risks tied to citizen-facing services. To build public trust, transparent AI governance and assurance frameworks should be established for both internally developed and procured AI capabilities.
"These frameworks need to specifically address risks associated with citizen-facing service delivery use cases, such as inaccurate or misleading results, data privacy and secure conversations. This can be done by ensuring governance processes specifically address each risk both before and after the initial implementation," advises Lacheca.
Government organisations are also encouraged to apply empathy-based, human-centred design when implementing citizen- or workforce-facing AI solutions. This approach keeps solutions aligned with community expectations about when and how the technology should be used.