How artificial intelligence slop is reshaping internet culture

A wave of short, surreal artificial intelligence videos is flooding social feeds, sparking backlash, new creative communities, and a rethink of what counts as art online.

The article explores the rise of “artificial intelligence slop,” a term used online for repetitive, surreal, and often lowbrow artificial intelligence-generated content that now saturates platforms like TikTok and Instagram. These clips frequently adopt a fake-surveillance aesthetic or lean into impossible physics and absurd mashups, powered by text-to-video tools such as OpenAI’s Sora, Google’s Veo series, Runway, and platforms like OpenArt. Early text-to-video systems from around 2022 to 2023 produced only a few seconds of blurry, glitchy footage, but newer models like Sora 2, Veo 3.1, and Runway’s Gen-4.5 generate more realistic videos that can last up to a minute and sometimes include sound. While these tools were pitched as the future of cinema, their real impact is on the “six-inch screen,” where both professional creators and ordinary users churn out endlessly riffable trends such as rabbits bouncing on a trampoline or Indian prime minister Narendra Modi dancing with Gandhi.

The piece profiles several creators who embrace artificial intelligence slop as a medium for experimentation and storytelling rather than mere cheap spectacle. Architecture designer turned artist Wenhui Lim leans into surreal auntie-centric worlds via her Niceaunties account, including a viral Auntlantis video that has racked up 13.5 million views on Instagram. Software developer Drake Garibay creates body-horror hybrids, including a viral TikTok clip captioned “Cooking up some fresh AI slop” that has drawn more than 8.3 million views. Digital artist Daryl Anselmo has posted an artificial intelligence-generated video every day since 2021 and compiled them into a gallery-exhibited project titled AI Slop, which includes pieces like feel the agi and a midnight diner vignette called Tot and Bothered. Other projects such as Granny Spills, which gained 1.8 million Instagram followers within three months, show how artificial intelligence workflows enable recurring characters, crossovers, and franchise-like universes. Meanwhile, viral formats like “Italian brainrot” demonstrate how collaborative lore-building can flourish when tools like OpenArt lower the barrier to entry for non-artists; its founders say more than 80% of its users have no artistic background.

Alongside creative play, the article details darker and more contentious sides of artificial intelligence slop. The same systems power racist deepfakes of Martin Luther King Jr., violent clips of women being strangled, and “nazislop” that repackages fascist imagery for teen feeds, fueling concerns about harm and manipulation. The term “slop,” which originated on 4chan and now broadly denotes low-quality mass content, has become shorthand for dismissing artificial intelligence-generated work, a stigma many creators resent even as some reclaim it semi-ironically. Creators describe multi-hour, iterative workflows that complicate the notion that artificial intelligence art is effortless, while critics and freelancers worry about economic displacement, including a Brookings study showing that after generative artificial intelligence tools launched in 2022, freelancers in artificial intelligence-exposed occupations saw about a 2% decline in contracts and a 5% drop in earnings. Scholars like Mindy Seu and Zach Lieberman situate artificial intelligence within a long history of new media facing skepticism from cultural institutions, even as they acknowledge that black-box models can erode direct artistic control. Ultimately, the article suggests that artificial intelligence slop embodies both a submission to algorithmic logic and a new form of democratized, participatory culture, where users remix, parody, and inhabit the very aesthetics they claim to hate, and where human impulses to imitate, joke, and build shared worlds persist despite, and through, artificial intelligence.

Impact Score: 52

Judge blocks Pentagon move against Anthropic

A federal judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk after finding major gaps between public threats, legal authority, and the government’s courtroom arguments. The dispute has become a test of how far the government can go in punishing an artificial intelligence company over political and contractual conflict.

Anumana wins FDA clearance for pulmonary hypertension ECG artificial intelligence tool

Anumana has received FDA 510(k) clearance for an artificial intelligence-enabled pulmonary hypertension algorithm designed for use with standard 12-lead electrocardiograms. The company says the software can help clinicians spot early signs of disease within existing workflows and without moving patient data outside the health system environment.

Anu Bradford on tech sovereignty and regulatory fragmentation

Anu Bradford argues that Europe is wavering in its role as the world’s digital rule-setter just as governments everywhere move toward more state control over technology. Global companies are being pushed to treat geopolitical risk, data sovereignty, and artificial intelligence governance as core strategic issues.
