IT Brief Australia - Technology news for CIOs & IT decision-makers

Australia pauses AI regulation plans amid safety concerns

Mon, 31st Mar 2025

The Federal Government of Australia has paused plans to introduce mandatory regulatory guardrails for Artificial Intelligence (AI), sparking concerns from Electronic Frontiers Australia (EFA) regarding the potential risks to Australians' safety, privacy, and rights.

Electronic Frontiers Australia, a not-for-profit organisation championing digital rights, has expressed concern about the government's decision, which appears to align with an international trend towards AI deregulation. EFA interprets the shift as a move towards a Big Tech-friendly landscape, as seen in the United States, that prioritises urgency, productivity, and innovation over stringent AI safety measures.

According to EFA, Australia needs to develop an AI regulatory framework that addresses its citizens' unique needs and rights rather than merely harmonising with international standards. The organisation voiced these concerns following reports from Innovation Aus, which indicated the Australian government's reluctance to finalise its AI regulatory framework amidst a global trend that appears to downplay AI safety.

John Pane, EFA Chair, said, "Artificial Intelligence is a transformative technology with immense potential, but without robust safeguards, it can also amplify risks—ranging from algorithmic bias to privacy violations and even threats to democratic processes."

"Australia cannot afford to delay enshrining AI safety and risk guardrails into law. The stakes are too high. Voluntary guardrails are barely a step above no regulation. Human rights and AI development are not a zero-sum game."

EFA highlights the necessity for a legal framework that encompasses transparency, accountability, and fairness in AI systems. This framework should include creating an Australian AI institute, as stated in Australia's commitment to the Seoul AI Summit Declaration in 2024. It should also mandate risk assessments for AI applications in critical areas such as healthcare, law enforcement, and finance.

The framework proposed by EFA advocates for prohibiting AI use cases that exploit vulnerabilities, infer emotions, engage in biometric categorisation, and practice subliminal manipulation. Additionally, clear accountability mechanisms for developers and deployers of AI technologies should be established, along with strong privacy protections to defend individuals against intrusive data practices.

In the absence of these guardrails, EFA warns, not only are individual rights at risk, but public trust in AI technologies could be significantly undermined. The organisation urges the Australian Government to resume its leadership in AI regulation and set a global precedent by prioritising the safety and rights of its citizens over short-term economic gains.
