Generative AI is already shaping how we create, share, and consume content online.
These tools can produce new images, videos, text, and audio in seconds, often with just a single prompt.
While this technology unlocks exciting possibilities, it’s also opening the door to urgent, unprecedented risks to children’s safety.
At Thorn, we’re already seeing the ways generative AI is being misused to exploit and abuse children. But we also know we’re in a critical window to act. If the whole ecosystem – including policymakers, platforms, child safety organizations, and others – acts now, we have the chance to shape this technology before these harms become even more widespread.
Here’s what we see happening today, and what must happen next.
Why hyper-realistic, instantly generated AI imagery is an urgent risk to children
Artificial intelligence isn’t new.
In fact, at Thorn, we’ve been leveraging artificial intelligence and machine learning to fight child sexual abuse and exploitation for over a decade. Our tools use predictive AI to detect child sexual abuse and exploitation at scale. This helps investigators identify more child victims of abuse faster, and disrupts the spread of child sexual abuse material on tech platforms.
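Thorn doesn’t publish its detection internals here, but one widely documented building block of detection at scale is matching uploaded files against lists of hashes of known abuse material maintained by clearinghouses such as NCMEC. Below is a minimal, hypothetical sketch of that idea in Python; the KNOWN_HASHES set and function names are illustrative assumptions, not Thorn’s actual tooling.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hex digests of known abuse material. In practice these
# lists come from access-controlled sources (e.g., NCMEC) and are never public.
KNOWN_HASHES: set[str] = set()


def file_digest(path: Path) -> str:
    """Compute a SHA-256 digest of a file, reading in chunks to bound memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_material(path: Path) -> bool:
    """Flag a file whose exact bytes match an entry in the known-hash list."""
    return file_digest(path) in KNOWN_HASHES
```

Exact cryptographic matching only catches unmodified copies; production systems layer perceptual hashing and machine learning classifiers on top to catch edited or never-before-seen content, which is where the predictive AI described above comes in.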
But what’s new and profoundly different is the explosion of easy-to-use generative AI tools capable of creating hyper-realistic synthetic content. Suddenly, anyone, anywhere can exploit children with just a few clicks.
The technology itself isn’t new. What’s new is how accessible and widespread it has become, and how photorealistic synthetic images now are, making it harder than ever to distinguish AI-generated visuals from real ones. This rapid evolution, along with the speed and scale at which harm can spread, poses significant challenges for protecting children.
How generative AI is already being misused to sexually exploit children
Deepfake nudes and AI-generated CSAM
Perpetrators are increasingly using generative AI to create sexually explicit synthetic images of real children, known as AI-generated child sexual abuse material (AIG-CSAM). This includes both fully fabricated images and “deepfake nudes”: real photos of children digitally altered to depict them in sexually explicit ways, without their knowledge or consent.
These violations are not hypothetical. They’re already happening, and in many cases the perpetrators aren’t strangers, but peers.
Nudifiers – and why they’re a problem
AI-powered “nudify” tools and image generators are widely available online and allow users to digitally undress or sexualize real photos, often in seconds. These tools are being marketed broadly: in 2024, ads for nudifiers even appeared on mainstream platforms, which faced public backlash over their role in spreading these tools through search results and ad placements.
Peer misuse and school-based harms
Kids themselves are increasingly misusing nudify apps to target their classmates. These images often begin as innocent school portraits or social media photos, then get altered with AI tools to show kids in explicit ways. It’s not just theoretical: this is already happening in schools across the country. In one Thorn study, 1 in 10 minors said they personally know someone who has used AI tools to generate nude images of other kids.
The consequences are severe. The content may be fake, but the trauma is real. Victims experience deep emotional harm, including anxiety, social isolation, bullying, and long-term reputational damage. In some cases, schools have had to involve law enforcement or take disciplinary action, while also grappling with how to create policies and education programs that can keep up with rapidly evolving technology.
A crisis of scale and realism
Deepfake nudes are especially dangerous because they appear disturbingly real, blurring the line between synthetic and authentic abuse. Whether the image was generated by a camera or a computer, the psychological toll on victims is often the same.
And as these tools become more realistic and more accessible, existing child protection systems risk becoming overwhelmed. Investigators already face a needle-in-a-haystack problem when trying to identify children in active harm. The influx of AI-generated abuse content only increases that haystack, clogging forensic workflows and making it harder for law enforcement to triage cases, prioritize real victims, and remove them from harm as quickly as possible. AIG-CSAM doesn’t just create new harm; it makes it harder to detect and respond to existing harm.
AI-enabled sextortion
We’ve also seen AI-generated nudes used in sextortion scams. Offenders may create a fake nude of a child, then use it to threaten or extort them for more explicit content or money. Even when the image was synthetically produced, the fear, shame, and manipulation inflicted on the victim are very real.
Thorn’s approach to tackling the child safety risks of generative AI
Generative AI is ushering in new forms of sexual abuse – and revictimization – at an alarming pace.
This is a threat that’s happening right now. And as AI capabilities advance, we risk falling further behind unless we act.
At Thorn, we believe it’s possible to build AI systems with safeguards in place from the start. That’s why we’re working directly with tech companies to embed Safety by Design principles into the development and deployment of generative AI systems. Safety must be a foundation, not an afterthought.
We’re also advocating for policy efforts to ensure that AI-generated child sexual abuse material is both recognized under the law as illegal and proactively addressed before it spreads. At the federal level, existing statutes cover much of this activity, but gaps remain. At the state level, more legislative clarity is often needed.
Creating or sharing AI-generated child sexual abuse material (AIG-CSAM) is illegal under federal U.S. law, which prohibits obscene content involving minors, even when computer-generated. While many states are still updating their laws to explicitly address AI-generated intimate images, arrests have already been made in cases involving the distribution of deepfake nudes of high school students. In most jurisdictions, sharing or producing these images, especially of minors, can lead to criminal charges, including for teens who misuse these tools against their peers.
For a real-world example, see this NYT article on AI-generated child sexual abuse material and legal gaps.
Most importantly, we’re helping people understand that AI-generated CSAM is not “fake” abuse. It causes real harm to real children, and it will take collective action to keep them safe.
What you can do now
If you’re a parent or caregiver:
Start early, stay open, and keep talking. Judgment-free, ongoing conversations help kids feel safe coming to you when something doesn’t feel right, especially in a digital world that’s evolving faster than any of us can keep up with. Ask questions, listen closely, and let them know they can always turn to you, no matter what.
If you’re unsure how to begin, our free Navigating Deepfake Nudes Guide offers expert-backed scripts and practical steps to navigate these conversations with confidence.
If you work at a company building or deploying generative AI:
You have the power and the responsibility to help prevent harm before it happens. Commit to building with Safety by Design in mind. Evaluate how your tools could be misused to generate harmful or abusive content, and take action. Learn more about Thorn’s Safety by Design initiative here.