As child sexual abuse material has become a growing problem for social media platforms, one organization is using artificial intelligence to mitigate that risk. Thorn is an El Segundo-based nonprofit that uses technology to prevent the human trafficking and sexual exploitation of children.
Thorn recently launched an AI-based platform called Safer Predict. The platform aims to help content-hosting platforms (such as social media or video streaming websites) detect whether their content contains child sexual abuse material or could lead to exploitation and grooming.
Thorn was founded in 2012 by actors Demi Moore and Ashton Kutcher. The nonprofit has long focused on the technological side of threats to children: the Child Crime Prevention and Safety Center estimates that around 90% of sexual advances directed at children happen online.
Before its formal launch, Thorn partnered with social media platform X (formerly known as Twitter) to beta test the text-detection offering, which integrates into X’s content moderation workflow. The machine learning model, trained on child sexual abuse data, aims to predict whether an online conversation with a minor is heading down a dangerous path.
“Child safety risks are skyrocketing, and platforms need solutions that can effectively scale protection for their users,” said Julie Cordua, chief executive of Thorn. She said that Safer Predict “gives platforms the power of Thorn’s cutting-edge child safety technology to identify new or previously unreported (child sexual abuse materials) and (child sexual exploitation) across images, video and text. This allows them to take swift action, remove harmful content, and create a safer digital environment for everyone.”
Thorn worked with the think tank All Tech Is Human in April to create a set of generative AI principles regarding children, such as guarding against users creating AI-generated images and videos of child sexual abuse. Major players in the AI space, including Anthropic, OpenAI, Google, Meta and Microsoft, committed to these principles.