Safer Internet Day spotlights AI, trust & child safety
Technology and online safety firms are using Safer Internet Day to warn that rapid advances in artificial intelligence and automated systems are reshaping basic assumptions about trust, identity and protection online.
Executives from DigiCert, Verifymy and G2A.COM say online safety increasingly depends on verified machine identities, stronger safeguards for young users and more rigorous approaches to digital commerce security. Their comments come as policymakers and regulators in Europe and elsewhere consider new rules on AI, content moderation and platform responsibility.
Machine trust
Paul Holt, Vice President for EMEA at digital trust firm DigiCert, says the rapid growth in machine-to-machine interactions is fundamentally changing online safety and digital trust.
"The internet is undergoing a quiet transformation," Holt remarked. "Machines are no longer merely serving human users; they are increasingly communicating with one another, exchanging data and executing decisions at a velocity that exceeds human oversight."
"That shift changes what safety really means online. When machines interact with other machines, trust cannot be assumed; identity has to be verified. Without strong authentication, automated systems can accept instructions, data or access from the wrong source and scale mistakes instantly."
"As a parent, I have learned that safety in the modern world is no longer about watching everything. It is about putting the right systems in place when oversight is no longer possible. Safer Internet Day is a reminder that, in a machine-led internet, trust has to be proven every time or it will fail at scale," Holt said.
Security specialists and infrastructure providers are investing in authentication and encryption as more critical services depend on automated connections between devices, cloud platforms and software agents.
AI and young users
Child safety-focused platforms such as Verifymy say AI has introduced new risks, including synthetic media and non-consensual imagery, while also offering new tools for protection.
"AI is becoming an integral part of young people's digital experiences, shaping the platforms they use, the content they encounter and the ways they interact online," said Andy Lulham, Chief Operating Officer at Verifymy. "Robust guardrails are vital to harness this technology, support positive experiences and prevent it from becoming a Pandora's Box."
"Without appropriate safeguards, AI can put young people at risk, whether through the spread of harmful misinformation or non-consensual intimate imagery and other forms of technology-facilitated sexual abuse, which can be created and amplified at speed."
"Importantly, however, AI can also be a powerful tool to safeguard our children online. From AI-powered content moderation that helps platforms remove harmful content faster and detect risky interactions sooner, to AI face-matching that ensures no one appears in content without completing an ID check, it could play a key role in creating age-appropriate boundaries for children to explore the internet safely." said Lulham.
"As AI evolves, so must the infrastructure that governs its use. The platforms and ecosystems that earn trust in the years ahead will be built around safety and child protection, recognising innovation and safeguarding not as opposing forces but as inseparable responsibilities.
"Initiatives like Safer Internet Day are more important than ever, reminding us of the importance of thoughtful design and the protections that empower children to explore the internet safely."
Commerce and fraud
Digital marketplaces are revisiting their security posture as AI tools lower the barrier for fraud, impersonation and social engineering attacks.
"Trust is the currency of digital commerce. At G2A.COM, safety is not a feature we add on; it is the foundation we build on." said Bartosz Skwarczek, Founder of G2A.COM.
"In an environment where innovation accelerates daily, our responsibility is to stay ahead of risk, not just react to it. We treat security as a core product capability. That means layered defences, advanced threat modelling, rigorous seller verification, secure payment protections, strong account safeguards and a dedicated Trust & Safety function overseeing high-risk activity. Secure-by-design is embedded into our platform architecture, so protection is proactive, continuous and scalable."
"AI is reshaping both opportunity and threat. Deepfakes, synthetic identities and AI-powered social engineering are raising the bar for everyone in our industry. Our approach is simple: assume deception is possible and verify at every step. We invest in AI-driven anomaly detection, stronger verification for sensitive actions, fast impersonation takedowns and ongoing education for both our teams and users. We also collaborate across the ecosystem to strengthen standards around authenticity and accountability online." said Skwarczek.
"Security is a shared responsibility. Technology, policy and user awareness must work together. Enabling multi-factor authentication, protecting credentials, staying within official channels and reporting suspicious activity are small actions that collectively reinforce security."
"Safer Internet Day is a reminder, but for us this commitment is constant. As threats evolve, so will our investment in security, privacy and responsible technology solutions. Our goal is clear: to protect trust at scale and ensure every transaction on G2A.COM is backed by resilience, transparency and leadership," said Bartosz Skwarczek, founder of G2A.COM.
Major consumer and business platforms are investing in fraud detection, identity verification and takedown processes as part of broader risk management programmes, amid rising regulatory focus on digital fraud and consumer protection.