Magentus first to sign national AI health software code
Magentus has become the first company to sign Australia's inaugural Voluntary Code of Conduct on the use of artificial intelligence in health software.
The principles-based framework was developed by the Medical Software Industry Association and the Medical Technology Association of Australia. It sets governance expectations for AI used in healthcare settings and includes a companion Accreditation Standard.
Governance baseline
The code covers 10 governance domains, including accountability, risk management, data governance, transparency, testing and monitoring, and people's rights to challenge AI-driven decisions.
Signatories commit to clear organisational ownership of AI systems, clinical validation before deployment, and ongoing monitoring thereafter. The framework references privacy law compliance and requires transparency for end users about how AI systems operate.
The framework aligns with the Australian Government's National AI Plan. A formal review of the code is scheduled for mid-2026.
Magentus provides clinical systems and practice management software used by medical professionals across Australia. Its portfolio includes tools for specialist practices, oncology workflows, radiology departments, and pathology networks.
Chief Technology Officer Brenden Conolly said signing the code was a straightforward decision.
"Patients, clinicians, and governments need to know that AI is being used carefully and transparently. This is about committing to exactly that, and earning their trust."
Administrative focus
Magentus said its AI features will operate within defined guardrails: they will not make clinical decisions, and clinicians will remain in control.
The company also outlined requirements for explainability and transparency of AI outputs, saying it will test and validate each feature in real-world conditions and treat privacy protections as non-negotiable.
In its cloud-based practice management platform Gentu, Magentus said AI could focus on administrative tasks, such as flagging incomplete referrals before they cause delays and catching missing information that could hold up billing. It also cited using system data to anticipate busy periods and support planning.
The company pointed to broader operational pressures, including growing volumes of information and administrative workload. It said AI could reduce duplicate testing, detect data mismatches, speed information flow, and reduce manual processes in clinical operations.
"AI isn't about replacing expertise, it's about removing the noise around it," Conolly said. "A well-designed system gives clinicians back time for direct patient care."
Outside regulation
Industry groups described the code as a response to a governance gap for health software that sits outside the Therapeutic Goods Administration's regulatory remit. This category can include practice management platforms, health informatics tools, and clinical workflow systems.
Without a clear baseline, health services and suppliers have faced uneven expectations for how AI should be assessed and monitored across products that influence patient pathways and operational decisions. The new code sets out a common set of principles for organisations that develop or deploy these systems.
Magentus said it works with government agencies, health networks, and medical colleges on product development and deployment. The Medical Software Industry Association is running stakeholder webinars to support engagement with the code across the sector.
Michele Blanshard, Managing Director of Practice Management & Oncology at Magentus and Vice-President of the Medical Software Industry Association, linked AI adoption to stronger governance and transparency expectations.
"We see responsible AI use as increasingly integral to providing safe, modern healthcare," Blanshard said. "We want to set the benchmark from the very beginning."
The mid-2026 review will assess whether the standards remain fit for purpose as AI develops and as more organisations sign the code.