Homeland Security tests Artificial Intelligence to spot synthetic child abuse images

The Department of Homeland Security’s Cyber Crimes Center awarded a contract to Hive AI to test detection algorithms that distinguish Artificial Intelligence-generated child sexual abuse material from content depicting real victims. The three-month trial aims to help investigators prioritize cases amid a surge in generative content.

Generative Artificial Intelligence has fueled a surge in the production of child sexual abuse images. The Department of Homeland Security’s Cyber Crimes Center, which leads cross-border investigations into child exploitation, is now testing whether Artificial Intelligence can help distinguish synthetic images from material depicting real victims, according to a new government filing. The center has awarded a contract to San Francisco-based Hive AI for software that assesses whether content was generated by Artificial Intelligence. The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo declined to discuss specifics. He confirmed, however, that the engagement involves applying the company’s detection algorithms to child sexual abuse material (CSAM).

The filing cites data from the National Center for Missing and Exploited Children showing a 1,325 percent increase in incidents involving generative Artificial Intelligence in 2024. Because investigators’ first priority is to find and stop ongoing abuse, the flood of Artificial Intelligence-generated content has blurred the line between synthetic material and cases involving real victims who may be at risk. A tool that reliably flags real victims would help teams triage their workloads. As the filing states, identifying Artificial Intelligence-generated images “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals.”

Hive AI markets creation tools for images and video alongside content moderation services that can flag violence, spam, and sexual material, and even identify celebrities. In December, MIT Technology Review reported that the company was selling deepfake detection technology to the US military. For combating CSAM, Hive and the child safety nonprofit Thorn offer a hashing tool that assigns unique IDs to known illegal content so platforms can block uploads, a standard line of defense. Those systems do not determine whether a file was produced by Artificial Intelligence, so Hive has built a separate detector for that purpose. Guo says the detector is not trained specifically on CSAM; underlying patterns in pixel combinations make the approach generalizable, and the company benchmarks its accuracy for each use case.
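To make the distinction concrete, here is a minimal Python sketch of the hash-and-blocklist idea described above: compute a unique fingerprint for a file and reject uploads whose fingerprint appears on a list of known material. It is an illustration only. It uses a plain SHA-256 rather than the perceptual hashing such tools actually rely on, and the `fingerprint` and `should_block` helpers and the blocklist entry are hypothetical, not part of Hive’s or Thorn’s products.

```python
import hashlib

# Minimal, hypothetical sketch of hash-based blocking. Real deployments
# (such as the Hive/Thorn tooling described above) use perceptual hashes
# that survive resizing or re-encoding; a plain SHA-256 only catches exact
# byte-for-byte copies. Note that matching against known hashes says nothing
# about whether a file was generated by Artificial Intelligence.

def fingerprint(file_bytes: bytes) -> str:
    """Return a unique ID for the file's exact contents."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes, known_hashes: set[str]) -> bool:
    """Reject an upload whose fingerprint matches a known entry."""
    return fingerprint(file_bytes) in known_hashes

# Usage: a platform checks an incoming upload against its blocklist.
# (The hash below is the SHA-256 of an empty file, used only for illustration.)
blocklist = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
print(should_block(b"", blocklist))  # True: the empty upload is on the list
```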

The government justified awarding the trial without a competitive bidding process, referencing a 2024 University of Chicago study that ranked Hive’s detector ahead of four others on Artificial Intelligence-generated art, as well as the firm’s Pentagon contract for identifying deepfakes. The pilot will run for three months. The National Center for Missing and Exploited Children did not respond to requests for comment on the effectiveness of such detection models in time for publication.
