At Thorn, we believe technology can and should protect children, not put them in danger. That's why we're proud to announce a new milestone in our global child safety work. We've co-authored a safety protocol for AI-generated child sexual exploitation and abuse (CSEA) with the UK AI Safety Institute (AISI).
This safety protocol creates a framework to prevent AI-generated CSAM. It covers the entire AI lifecycle, from model training and deployment to content hosting and moderation. It is a roadmap for developers, model hosting services, and others to build AI systems with child safety at their core.
Collaborations like this represent a significant step forward in Thorn's Safety by Design initiative, first launched with All Tech Is Human to guide ethical AI development. The new safety protocol expands that foundation, aligning with ongoing standardization efforts. It brings Thorn's expertise into direct partnership with the UK AI Safety Institute, the world's largest state-backed organization of its kind.
Why this matters
The misuse of AI to further child sexual exploitation and abuse is an emerging and serious threat. Without clear safeguards, these technologies can be exploited to create abuse material. By establishing prevention frameworks now, this safety protocol helps stop these harms at scale.
How this fits into Thorn's broader strategy
This work is one part of Thorn's comprehensive AI safety strategy, built around three pillars:
Transparency and improvement. Sharing transparency reports that raise companies' commitments and encourage shared responsibility.
Standards and scalability. Collaborating with global standard-setting bodies to make safety interventions measurable and auditable.
Policy and partnership. Engaging with policymakers to ensure emerging legislation is both feasible and impactful.
Each of these pillars helps make safety a default in every layer of our digital world.
Looking ahead
Technical standards are among the strongest tools we have to keep children safe. They ensure consistent practices across industries, enable third-party audits, and create shared accountability. With this safety protocol, Thorn is extending the global reach of the existing Safety by Design principles and mitigations while adding new practices to address the evolving nature of generative AI-enabled sexual harms against children.
This new safety protocol represents an impactful step toward global accountability for AI systems. It strengthens the safety infrastructure that prevents technology from harming children and reaffirms Thorn's role as a trusted voice in technology, policy, and child protection.
With the UK AISI, we're proving that proactive partnership can help make the world safer for every child.