YouTube Struggles With Surge of Artificial Intelligence-Generated Cartoon Gore Targeting Kids

Hundreds of YouTube channels are using Artificial Intelligence to spread disturbing cartoon content, reviving 'Elsagate' fears despite new moderation rules.

Hundreds of YouTube channels are leveraging generative Artificial Intelligence tools to churn out disturbing animated videos targeting children, according to an investigation by WIRED. These videos, which often masquerade as innocent kids’ entertainment, depict cartoon characters such as Minions, cat avatars, and popular children’s icons engaged in violent, sexually suggestive, or abusive scenarios. The tactics recall the notorious ‘Elsagate’ scandal of 2017, in which characters like Elsa from ‘Frozen’ and Spider-Man were shown in perilous and inappropriate situations—content that evaded detection and infiltrated YouTube Kids by manipulating platform algorithms.

The current wave, enabled by the ease and speed of Artificial Intelligence-generated animation, has made it trivial for bad actors to mass-produce and monetize low-quality content at scale. Channel names and video metadata often use popular search terms like ‘minions,’ ‘cute cats,’ and ‘Disney,’ ensuring the videos are recommended to unsuspecting viewers. Many cat-themed channels, for example, depict abuse, starvation, and assault in storylines styled as fables, which end with only temporary or insincere redemption for the abusers. Nonprofit organization Common Sense Media, after reviewing the content exposed by WIRED, cited recurring portrayals of child abuse, torture, and extreme peril, raising serious red flags about the psychological harm such videos could inflict.

YouTube, when contacted, acknowledged the severity of the issue, stating it had terminated two flagged channels, suspended monetization on three others, and removed offending videos for breaching its Child Safety policies. However, enforcement remains challenging as new channels frequently emerge to replace those taken down, sometimes reposting near-identical content. YouTube emphasized its requirement that creators label Artificial Intelligence-generated material and highlighted ongoing efforts to promote higher-quality, expert-reviewed content for children. Despite these measures and the introduction of stricter content labeling and moderation since 2019, harmful animated material persists, often circumventing detection and extending its reach via trends, metadata manipulation, and migration to other platforms such as TikTok.

As experts and child advocacy organizations push for legislative safeguards and improved oversight, the speed and volume of Artificial Intelligence-powered animation continue to complicate moderation. The investigation demonstrates that without comprehensive, adaptive policy changes and coordinated stakeholder action, children remain at risk from the evolving ecosystem of malicious or exploitative automated content online.

Impact Score: 82

Firefox 148 adds artificial intelligence killswitch after user backlash

Mozilla is adding a persistent artificial intelligence killswitch to Firefox 148 after strong community backlash against plans for an artificial-intelligence-first browser experience. Users will be able to disable individual artificial intelligence features or shut them all off with a single control.

Western Digital unveils high bandwidth hard drives with 4x I/O performance

Western Digital is introducing new high bandwidth hard drives that combine multi-head read and write techniques with a dual actuator design to significantly boost I/O performance while preserving capacity. The roadmap targets up to 100 TB HDDs with throughput that aims to rival traditional QLC SSDs on price and density.

Nvidia and Dassault deepen partnership to build industrial virtual twins

Nvidia and Dassault Systèmes are expanding their long-running partnership to build shared industrial Artificial Intelligence world models that merge physics-based virtual twins with accelerated computing. The companies aim to shift engineering, manufacturing and scientific work into real-time, simulation-driven workflows powered by Artificial Intelligence companions.

Moltbot and the case for human agency as the core Artificial Intelligence guardrail

Moltbot’s viral rise highlights both the appeal of deeply personalized Artificial Intelligence agents and the rising need for users to assert their own agency, security practices, and governance. Human decision making and responsibility emerge as the decisive safeguard as open source agentic Artificial Intelligence systems gain system level powers.
