Child sexual abuse and exploitation represent some of the most pressing child safety challenges of our digital age. From grooming and sextortion to the production and sharing of child sexual abuse material (CSAM), these interrelated threats create complex harms for children worldwide. Behind every instance of exploitation is a child experiencing trauma that can have lasting impacts. As technology evolves, so do the methods perpetrators use to exploit children, creating an environment where protection efforts must continuously adapt and scale.
The problem is immense:
- In 2024 alone, more than 100 files of child sexual abuse material were reported every minute.
- In 2023, an average of 812 reports of sexual extortion were submitted to NCMEC per week.
- NCMEC saw a 192% increase in online enticement reports between 2023 and 2024.
These numbers show that abuse material represents only one facet of a broader landscape of abuse. Children face grooming, sextortion, deepfakes, and other forms of harmful exploitation. When these threats go undetected, children remain vulnerable to ongoing exploitation, and perpetrators continue operating with impunity.
Technical innovation: A core pillar of Thorn's strategy
Technical innovation is one of Thorn's four pillars of child safety, serving as the technological foundation that enables all our child protection tools. By developing cutting-edge solutions through an iterative problem-solution process, we build scalable technologies that integrate with and enhance our other strategic pillars:
- Our Research and Insights give us early visibility into emerging threats, so we can rapidly provide a technology response.
- Our Child Victim Identification tools help investigators more quickly find children who are being sexually abused, protecting children from active abuse.
- Our Platform Safety solutions enable tech platforms to detect and prevent exploitation at scale.
This comprehensive approach ensures that our technical innovations don't exist in isolation but work in concert with our other initiatives to create a robust safety net for children online.
A powerful example of our research & insights translating into technical innovation is the development of Scene-Sensitive Video Hashing (SSVH). Thorn identified that video-based CSAM was becoming an increasingly prevalent and sophisticated form of abuse material, while existing detection tools focused primarily on image material, leaving a critical gap in the child safety ecosystem. In response, our technical innovation team developed one of the first video hashing and matching algorithms tailored specifically for CSAM detection. SSVH uses perceptual hashing to identify visually distinct scenes within videos, allowing our CSAM Image Classifier to score the likelihood that each scene contains abuse material. The collection of scene hashes makes up the video's hash. This breakthrough technology has been deployed through our Platform Safety tools since 2020.
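To make the idea concrete, here is a minimal Python sketch of scene-sensitive video hashing. It is not Thorn's SSVH implementation: the scene splitter, perceptual hash, and classifier (split_scenes, perceptual_hash, score_scene) are stand-in callables that a production system would supply.

```python
# A minimal sketch of the idea behind scene-sensitive video hashing. This is
# not Thorn's SSVH implementation: split_scenes, perceptual_hash, and
# score_scene are stand-in callables a production system would supply.
from dataclasses import dataclass
from typing import Callable, Iterable, List

Frame = bytes  # placeholder type for a decoded video frame


@dataclass
class SceneHash:
    fingerprint: int         # perceptual hash of a visually distinct scene
    abuse_likelihood: float  # classifier score for the scene, in [0, 1]


def hash_video(
    frames: Iterable[Frame],
    split_scenes: Callable[[Iterable[Frame]], List[List[Frame]]],
    perceptual_hash: Callable[[Frame], int],
    score_scene: Callable[[Frame], float],
) -> List[SceneHash]:
    """Hash and score each visually distinct scene; the collection of
    scene hashes makes up the video's hash."""
    video_hash: List[SceneHash] = []
    for scene in split_scenes(frames):
        keyframe = scene[len(scene) // 2]  # pick a representative frame
        video_hash.append(SceneHash(
            fingerprint=perceptual_hash(keyframe),
            abuse_likelihood=score_scene(keyframe),
        ))
    return video_hash
```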
The technology behind child protection
As you might imagine, the sheer volume of child sexual abuse material and exploitative messages far exceeds what human moderators could ever review. So, how do we solve this problem? By developing technologies that serve as a force for good:
- Advanced CSAM detection techniques: Our machine learning classifiers can find new and unknown abuse images and videos, while our hashing and matching solutions find known image and video CSAM. These technologies are used to prioritize and triage abuse material, which can accelerate the work of identifying children currently being abused and combat revictimization (a sketch of how these pieces can fit together follows this list).
- Text-based exploitation detection: Beyond images and videos, our technology identifies text conversations related to CSAM, sextortion, and other sexual harms against children. Detecting these harmful conversations creates opportunities for early intervention before exploitation escalates.
- Emerging threat prevention: Our technical teams develop forward-looking solutions to address new challenges, including AI-generated CSAM, evolving grooming tactics, and sextortion schemes that target children.
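Below is a hedged sketch of how these techniques can complement each other in a single triage step: hash matching catches known material, while a classifier flags potentially new material for human review. The names (triage_image, hash_image, classifier_score, review_threshold) are illustrative assumptions, not Thorn's API.

```python
# A sketch, not Thorn's API: hash matching handles known CSAM, a classifier
# flags potentially new material, and everything else passes through.
from typing import Callable, Set


def triage_image(
    image: bytes,
    hash_image: Callable[[bytes], int],          # hypothetical hashing helper
    known_hashes: Set[int],                      # database of known-CSAM hashes
    classifier_score: Callable[[bytes], float],  # hypothetical classifier
    review_threshold: float = 0.8,               # illustrative cutoff
) -> str:
    """Route one uploaded image to the appropriate queue."""
    if hash_image(image) in known_hashes:
        return "known-csam"             # known material: flag for removal
    if classifier_score(image) >= review_threshold:
        return "priority-human-review"  # possibly new abuse material
    return "no-action"
```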
What’s a classifier precisely?
Classifiers are algorithms that use machine learning to automatically sort data into categories.
For example, when an email goes to your spam folder, there's a classifier at work.
It has been trained on data to determine which emails are most likely to be spam and which are not. As it is fed more of these emails, and users continue to tell it whether it is right or wrong, it gets better and better at sorting them. The power these classifiers unlock is the ability to label new data using what they have learned from historical data; in this case, predicting whether new emails are likely to be spam.
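To make the spam example concrete, here is a toy classifier built with scikit-learn. The training set is deliberately tiny and purely illustrative; it simply shows the train-then-predict loop described above.

```python
# A toy version of the spam example: a classifier trained on a handful of
# labeled emails learns to label new ones. Purely illustrative; real spam
# filters train on vastly more data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",     # spam
    "claim your free money",    # spam
    "meeting notes attached",   # not spam
    "lunch tomorrow at noon?",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Learn word counts from historical data, then fit a Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Label new data using what the model learned from historical data.
print(model.predict(["free prize money now"]))  # -> ['spam']
```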
Thorn's machine learning classifiers can find new or unknown CSAM in both images and videos, as well as text-based child sexual exploitation (CSE).
These technologies are then deployed in our Child Victim Identification and Platform Safety tools to protect children at scale, making them a powerful piece of the digital safety net that shields children from sexual abuse and exploitation.
Here's how different partners across the child protection ecosystem use this technology:
Law enforcement can identify victims faster as the classifier surfaces unknown CSAM images and videos during investigations. Technology platforms can expand their detection capabilities and scale the discovery of previously unseen or unreported CSAM. They can also detect text conversations that indicate suspected imminent or ongoing child sexual abuse.
What is hashing and matching?
Hashing and matching is one of the most foundational and impactful technologies in child protection. At its core, hashing converts known CSAM into a unique digital fingerprint: a string of numbers generated by an algorithm. These hashes are then compared against comprehensive databases of known CSAM without ever exposing the actual content to human reviewers. When our systems detect a match, the harmful material can be immediately flagged for removal.
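For illustration, here is a minimal Python sketch of the matching step under two common schemes: exact cryptographic hashing, and perceptual hashing compared by Hamming distance. The helper names and the distance threshold are assumptions made for the sketch, not Safer's production matcher.

```python
# Illustrative only: two common matching schemes. Exact (cryptographic)
# hashes match byte-identical files; perceptual hashes match visually
# similar files within a small Hamming distance. Thresholds are invented.
import hashlib
from typing import Set


def exact_fingerprint(file_bytes: bytes) -> str:
    """Cryptographic hash: identical files yield identical fingerprints."""
    return hashlib.sha256(file_bytes).hexdigest()


def is_known_exact(file_bytes: bytes, known: Set[str]) -> bool:
    return exact_fingerprint(file_bytes) in known


def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")


def is_known_perceptual(phash: int, known: Set[int], max_distance: int = 8) -> bool:
    """Visually similar files hash to nearby values, so a small Hamming
    distance counts as a match."""
    return any(hamming_distance(phash, k) <= max_distance for k in known)
```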
Through our Safer product, we've deployed a large database of verified hashes (76.6 million and growing), enabling our customers to cast a wide net for detection. In 2024 alone, we processed over 112.3 billion images and videos, helping customers identify 4,162,256 files of suspected CSAM to remove from circulation.
How does child safety technology help?
New CSAM may depict a child who is actively being abused. Perpetrators groom and sextort children in real time via conversation. Using classifiers can help significantly reduce the time it takes to find a victim and remove them from harm, and hashing and matching algorithms can be used to flag known material for removal to prevent revictimization.
However, finding these image, video, and text signals of imminent and ongoing child sexual abuse and revictimization often relies on manual processes that place the burden on human reviewers or user reports. To put it in perspective, you would need a team of hundreds of people with unlimited hours to achieve what a classifier can do through automation.
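As a rough illustration of what that automation buys, here is a sketch of a review queue that uses classifier scores to surface the likeliest abuse material first, instead of reviewing uploads in arrival order. The IDs and scores are invented for the example.

```python
# Illustrative sketch: sort a review queue by classifier score so the
# likeliest abuse material reaches human reviewers first. IDs and scores
# are invented for the example.
from typing import List, Tuple

QueueItem = Tuple[str, float]  # (item_id, classifier_score)


def prioritize_queue(items: List[QueueItem]) -> List[QueueItem]:
    """Return items highest-score-first so likely abuse material is
    reviewed before low-risk content."""
    return sorted(items, key=lambda item: item[1], reverse=True)


# Example: the 0.97 item jumps to the front of the queue.
print(prioritize_queue([("a", 0.12), ("b", 0.97), ("c", 0.55)]))
```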
Just like the technology we all use, the tools perpetrators deploy change and evolve. Thorn's technical innovation is informed by our research and insights, which help us respond to new and emerging threats like grooming, sextortion, and AI-generated CSAM.
A Flickr Success Story
Popular image and video hosting site Flickr uses Thorn's CSAM Classifier to help their reviewers sort through the mountain of new content uploaded to their site every day.
As Flickr's Trust and Safety Manager, Jace Pomales, summarized it, "We don't have a million bodies to throw at this problem, so having the right tooling is really important to us."
One recent classifier hit led to the discovery of 2,000 previously unknown images of CSAM. Once reported to NCMEC, law enforcement conducted an investigation, and a child was rescued from active abuse. That's the power of this life-changing technology.
Technology must be a force for good if we are to stay ahead of the threats children face in a digital world. Our products embrace cutting-edge technology to transform how children are protected from sexual abuse and exploitation. It's thanks to our generous supporters and donors that our work is possible.
If you work in the technology industry and are interested in using Safer and the CSAM Classifier on your online platform, please contact info@safer.io. If you work in law enforcement, you can contact info@thorn.org or fill out this application.