Hundreds of YouTube channels are using generative artificial intelligence (AI) tools to churn out disturbing animated videos targeting children, according to an investigation by WIRED. The videos, which often masquerade as innocent kids’ entertainment, depict cartoon characters such as Minions, cat avatars, and other popular children’s icons in violent, sexually suggestive, or abusive scenarios. The tactics recall the notorious ‘Elsagate’ scandal of 2017, in which characters like Elsa from ‘Frozen’ and Spider-Man appeared in perilous and inappropriate situations, and the content evaded detection and reached YouTube Kids by gaming the platform’s recommendation algorithms.
The current wave, enabled by the ease and speed of AI-generated animation, has made it trivial for bad actors to mass-produce and monetize low-quality content at scale. Channel names and video metadata often use popular search terms such as ‘minions,’ ‘cute cats,’ and ‘Disney’ to ensure the videos are recommended to unsuspecting viewers. Many cat-themed channels, for example, feature abuse, starvation, and assault in storylines styled as fables, which resolve with only temporary or insincere redemption for the abusers. The nonprofit Common Sense Media, after reviewing the content surfaced by WIRED, cited recurring portrayals of child abuse, torture, and extreme peril, and warned of the psychological harm such videos could inflict on young viewers.
When contacted, YouTube acknowledged the severity of the issue, saying it had terminated two flagged channels, suspended monetization on three others, and removed offending videos for violating its Child Safety policies. Enforcement remains difficult, however, as new channels frequently emerge to replace those taken down, sometimes reposting near-identical content. YouTube emphasized its requirement that creators label AI-generated material and highlighted ongoing efforts to surface higher-quality, expert-reviewed content for children. Despite these measures and the stricter labeling and moderation rules introduced since 2019, harmful animated content persists, often circumventing detection and extending its reach through trends, metadata manipulation, and migration to other platforms such as TikTok.
As experts and child advocacy organizations push for legislative safeguards and better oversight, the speed and volume of AI-powered animation continue to outpace moderation. The investigation shows that without comprehensive, adaptive policy changes and coordinated stakeholder action, children remain at risk from an evolving ecosystem of malicious or exploitative automated content online.