HackerOne launches safe harbour to protect AI testers
HackerOne has launched a Good Faith AI Research Safe Harbor framework that sets out authorisation and legal protections for researchers who test AI systems in good faith.
The company said the framework addresses legal uncertainty around AI testing, which it described as a factor that can slow responsible research and increase risk as AI systems expand across products and services.
The Good Faith AI Research Safe Harbor extends HackerOne's Gold Standard Safe Harbor, which the company introduced in 2022. HackerOne said the earlier framework has seen broad adoption for good-faith security research on conventional software.
New framework
HackerOne positioned the new safe harbour as a shared standard for organisations and researchers, one that clarifies what it calls responsible AI research. The company said AI testing can produce techniques and outcomes that do not align with existing vulnerability disclosure approaches.
The company said the framework defines "Good Faith AI Research" and sets out what it describes as clear authorisation for responsible AI testing. It also sets expectations for organisations that want independent researchers to scrutinise their AI systems.
Ilona Cohen, Chief Legal and Policy Officer at HackerOne, said the company sees unclear expectations as a barrier to effective AI testing.
"AI testing breaks down when expectations are unclear," said Ilona Cohen, Chief Legal and Policy Officer at HackerOne. "Organisations want their AI systems tested, but researchers need confidence that doing the right thing won't put them at risk. The Good Faith AI Research Safe Harbor provides clear, standardized authorization for AI research, removing uncertainty on both sides."
Commitments
HackerOne said organisations that adopt the framework commit to treating good-faith AI research as authorised activity. The company listed several elements that it said form part of that commitment.
These include refraining from legal action against researchers who operate within the framework. HackerOne also said adopting organisations would provide limited exemptions from restrictive terms of service. It added that organisations would support researchers if third parties pursue claims related to authorised research.
HackerOne said the safe harbour applies only to AI systems owned or controlled by the adopting organisation. The company also said the framework is designed around responsible disclosure and collaboration between researchers and organisations.
Market context
AI security testing has become a growing focus for organisations deploying generative AI features and agentic systems, alongside established approaches such as bug bounties and vulnerability disclosure. HackerOne said that the AI research safe harbour sits alongside its existing safe harbour model for traditional software, which it describes as a way of protecting good-faith research activity.
Kara Sprague, CEO of HackerOne, linked the framework to confidence in AI systems under real-world testing conditions.
"AI security is ultimately about trust," said Kara Sprague, CEO of HackerOne. "If AI systems aren't tested under real-world conditions, trust erodes quickly. By extending safe harbor protections to AI research, HackerOne is defining how responsible testing should work in the AI era. This is how organizations find problems earlier, work productively with researchers, and deploy AI with confidence."
Availability
HackerOne said the Good Faith AI Research Safe Harbor is available to its customers as a standalone framework. It said organisations can adopt it alongside the Gold Standard Safe Harbor.
The company said programmes that adopt the framework can signal to researchers that AI testing is welcome, authorised, and protected under the terms set out by the adopting organisation.
HackerOne said it expects the framework to shape how organisations set authorisation and expectations for AI testing as deployments broaden across products and services.