YouTube Struggles With Surge of Artificial Intelligence-Generated Cartoon Gore Targeting Kids

Hundreds of YouTube channels are using Artificial Intelligence to spread disturbing cartoon content, reviving 'Elsagate' fears despite new moderation rules.

Hundreds of YouTube channels are leveraging generative Artificial Intelligence tools to churn out disturbing animated videos targeting children, according to an investigation by WIRED. These videos, which often masquerade as innocent kids' entertainment, depict cartoon characters such as Minions, cat avatars, and popular children's icons engaged in violent, sexually suggestive, or abusive scenarios. The tactics recall the notorious 'Elsagate' scandal of 2017, in which characters like Elsa from 'Frozen' and Spider-Man were shown in perilous and inappropriate situations—content that evaded detection and infiltrated YouTube Kids by manipulating platform algorithms.

The current wave, enabled by the ease and speed of Artificial Intelligence-generated animation, has made it trivial for bad actors to mass-produce and monetize low-quality content at scale. Channel names and video metadata often use popular search terms like 'minions,' 'cute cats,' and 'Disney' to ensure the videos are recommended to unsuspecting viewers. Many cat-themed channels, for example, feature abuse, starvation, and assault within storylines styled as fables, which resolve only with temporary or insincere redemption for the abusers. Nonprofit organization Common Sense Media, after reviewing the content exposed by WIRED, cited recurring portrayals of child abuse, torture, and extreme peril, raising serious red flags about the psychological harm such videos could inflict.

YouTube, when contacted, acknowledged the severity of the issue, stating it had terminated two flagged channels, suspended monetization on three others, and removed offending videos for breaching its Child Safety policies. However, enforcement remains challenging, as new channels frequently emerge to replace those taken down, sometimes reposting the same content verbatim. YouTube emphasized its requirement that creators label Artificial Intelligence-generated material and highlighted ongoing efforts to promote higher-quality, expert-reviewed content for children. Despite these measures and the stricter content labeling and moderation introduced since 2019, harmful animated material persists, often circumventing detection and extending its reach via trends, metadata manipulation, and migration to other platforms like TikTok.

As experts and child advocacy organizations push for legislative safeguards and improved oversight, the speed and volume of Artificial Intelligence-powered animation continue to complicate moderation. The investigation demonstrates that without comprehensive, adaptive policy changes and coordinated stakeholder action, children remain at risk from the evolving ecosystem of malicious or exploitative automated content online.

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The Education Committee will examine opportunities to improve teaching and reduce workload, alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see Artificial Intelligence training gap as shadow tool use grows

New research finds that 6 in 10 UK businesses say employees lack comprehensive Artificial Intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on the returns they expect from Artificial Intelligence.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
