CurricuLLM launches Student Safety Centre for schools
Mon, 11th May 2026
CurricuLLM has launched a Student Safety Centre for schools in Australia and New Zealand. The system identifies 21 categories of harm in student interactions with artificial intelligence.
Aimed at K-12 education, the product is designed to detect risks that arise when pupils use AI in classroom and homework settings. Most existing AI safety tools were developed for adults using general-purpose chatbots rather than for school environments.
The Sydney-based company says the system monitors every student interaction and sorts signals into severity tiers. Depending on the level, an issue may be logged, flagged for review, or escalated to a designated member of staff.
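CurricuLLM has not published how the tiers work internally. As an illustration only, the routing step it describes could look something like the sketch below, where the tier names, the `Severity` enum, and `route_signal` are all hypothetical:

```python
from enum import Enum

# Hypothetical severity tiers; CurricuLLM has not published its actual scheme.
class Severity(Enum):
    LOW = 1     # logged for record-keeping only
    MEDIUM = 2  # flagged for later review by staff
    HIGH = 3    # escalated to a designated member of staff

def route_signal(severity: Severity, detail: str) -> str:
    """Map a detected signal to the three actions described in the launch."""
    if severity is Severity.HIGH:
        return f"ESCALATE to designated staff: {detail}"
    if severity is Severity.MEDIUM:
        return f"FLAG for review: {detail}"
    return f"LOG only: {detail}"

print(route_signal(Severity.MEDIUM, "possible academic offloading"))
```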
The listed categories include academic offloading, where a student tries to get the system to complete work for them; attachment forming, where use patterns suggest emotional reliance on the AI; and personal revelations, where a pupil discloses information about their wellbeing or home life that a trusted adult should see.
The broader taxonomy covers academic integrity, safeguarding, social and emotional wellbeing, and developmentally inappropriate content. CurricuLLM says the centre was designed to reflect how students actually use AI in schools while keeping ordinary learning conversations private.
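For a concrete picture of how the pieces relate, the three named categories can be placed under the four published groupings as a simple mapping. The structure below is a sketch built only from the announcement, not the company's schema, and the full system lists 21 categories:

```python
# Illustrative only: the categories named in the launch, grouped under
# the four taxonomy areas CurricuLLM describes.
TAXONOMY = {
    "academic integrity": ["academic offloading"],
    "safeguarding": ["personal revelations"],
    "social and emotional wellbeing": ["attachment forming"],
    "developmentally inappropriate content": [],  # no categories named in the launch
}

def group_for(category: str) -> str | None:
    """Return the taxonomy group a category belongs to, if listed."""
    for group, categories in TAXONOMY.items():
        if category in categories:
            return group
    return None

assert group_for("attachment forming") == "social and emotional wellbeing"
```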
The system follows a principle of minimum necessary visibility. Under that approach, teachers and wellbeing staff can view the information they need to act, while routine exchanges between students and the platform are not surfaced.
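The company describes minimum necessary visibility only in outline. One way to read it, sketched below with hypothetical role names and thresholds, is a filter that surfaces an interaction to a staff member only when its severity crosses the level that role needs in order to act:

```python
from dataclasses import dataclass

# Hypothetical thresholds: routine exchanges (severity 0) are never surfaced;
# wellbeing staff see flagged and escalated items, teachers see escalations only.
ROLE_THRESHOLDS = {"teacher": 3, "wellbeing": 2}

@dataclass
class Interaction:
    student_id: str
    severity: int  # 0 = routine, 1 = logged, 2 = flagged, 3 = escalated
    summary: str

def visible_to(role: str, interactions: list[Interaction]) -> list[Interaction]:
    """Surface only what a given role needs; unknown roles see nothing."""
    threshold = ROLE_THRESHOLDS.get(role, 4)
    return [i for i in interactions if i.severity >= threshold]
```

On this reading, the design choice is that visibility is a property of the staff role rather than of the student, which is what keeps routine study conversations out of view by default.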
Dan Hart, founder and chief executive officer of CurricuLLM, said schools had been asking for this type of oversight for some time.
"Schools have told us the same thing for years. They want students using AI, but they need to know when something is going wrong. Generic content filters miss almost everything that actually matters in a school setting. The Student Safety Centre was built for the patterns that show up in real student use, not the ones a corporate trust and safety team would worry about," Hart said.
School focus
CurricuLLM says it developed the system against the Australian Curriculum, the New Zealand Curriculum, and the safeguarding frameworks already used by schools in both countries. The product is positioned as an addition to existing pastoral care structures rather than a substitute for them.
That approach reflects a broader debate in schools over how to introduce generative AI without weakening academic integrity or missing signs of student distress. While many teachers have adopted AI tools for lesson preparation and tutoring, concerns remain about plagiarism, unsupervised disclosure of personal information, and the risk of students forming unhealthy relationships with conversational systems.
Products aimed at young users have faced growing scrutiny over how they manage those risks. In school settings, that scrutiny often extends beyond harmful content to behavioural signals and welfare issues that standard moderation systems built for consumer chatbots may miss.
CurricuLLM says two independent assessments were completed alongside the launch. One was an Apgard YouthSafe AI certification following an audit covering content risks, behavioural risks, and red-team testing for youth-focused AI products.
The other was an assessment under the Safer Technologies 4 Schools framework run by Education Services Australia. The framework is widely used to assess privacy and security in digital products for K-12 education across Australia and New Zealand.
The company also says its system aligns with the six principles of the Australian Framework for Generative AI in Schools. Those principles are intended to guide the use of generative AI in ways that support teaching and learning while addressing safety, fairness, and accountability.
Pilot use
According to CurricuLLM, the Student Safety Centre is already live in pilot deployments in a number of independent schools. The company did not name the schools involved.
Alongside the safety announcement, the company said its tutoring platform is benchmarked at 89 per cent accuracy on Australian and New Zealand curriculum content, compared with about 41 per cent for general-purpose frontier models. The claim supports its broader effort to distinguish a school-specific system from mainstream AI tools not built around local curricula or school safeguarding practices.
The launch adds to a growing market for education-focused AI products in Australia and New Zealand, where schools are under pressure to balance experimentation with tighter oversight. For suppliers in that market, the challenge is no longer only whether AI can help students learn, but whether schools can monitor misuse and signs of harm without turning every classroom interaction into a surveillance exercise.
CurricuLLM says the centre is intended to give staff enough information to intervene when needed while leaving routine study conversations alone.