HackerOne has launched a framework that legally protects good-faith AI research, allowing organizations and researchers to collaborate more safely on testing AI systems.
HackerOne has launched the Good Faith AI Research Safe Harbor, a framework that offers legal protection to researchers who test AI systems in good faith. The framework is intended to resolve the current legal ambiguity surrounding AI research: many AI tests fall outside traditional vulnerability disclosure frameworks, which exposes researchers to legal risk.
Built on an existing framework
The framework builds on HackerOne's Gold Standard Safe Harbor from 2022, which already offered protection for traditional software security research. Both frameworks define how organizations can explicitly authorize and protect researchers who identify vulnerabilities.
Organizations that adopt the framework commit to not taking legal action against researchers who test AI systems in good faith. They also grant exceptions to restrictive terms of use and offer support when third parties file a claim. The protection applies only to AI systems that the organization itself owns or manages.
A standard for safe AI testing
According to HackerOne, clear communication is essential to keeping AI systems safe. Organizations want their AI to be tested, but researchers need assurance that doing so will not land them in legal trouble. The new framework is meant to bridge that gap.
The framework is available to HackerOne customers as a standalone offering, distinct from the existing Gold Standard Safe Harbor. Participating organizations signal that AI research is welcome, which should improve both the quality of testing and the collaboration between the two sides.
