Elon Musk’s Grok Imagine creates deepfake nudes of celebrities

Elon Musk’s Grok Imagine has users generating photorealistic images and short videos that expose gaps in moderation for celebrity content; the controversy raises fresh questions about Artificial Intelligence governance.

Grok Imagine, a video generation feature promoted by Elon Musk, launched this week for Apple users and quickly went viral, producing tens of millions of images in a short span. Musk said more than 34M images were generated in 48 hours and the feature is available through a SuperGrok subscription. Users create a still image from a text prompt and then convert it to a short clip using four presets: ‘Custom’, ‘Normal’, ‘Fun’ and ‘Spicy’. The rollout has been accompanied by Musk posting generated videos on X, and by promises that quality will keep improving as the feature expands to Android and more users.

‘Spicy’ mode has become the flashpoint. Unlike other generative tools that block celebrity text prompts, Grok Imagine allowed prompts that produced sexualized video content featuring recognisable public figures. The Verge reported a nude video of Taylor Swift produced from the prompt ‘Taylor Swift celebrating Coachella with the boys’, and Deadline ran its own tests. A prompt such as ‘Scarlett Johansson walks the red carpet’ yielded photorealistic stills, some with clear awards-event markers, and when turned into video the results were mixed: some attempts returned a ‘Video moderated’ message, while others produced suggestive clips, including images of a star lifting a dress to reveal underwear. The app also generated explicit or topless outputs for other names mentioned in the piece, including Sydney Sweeney, Jenna Ortega, Nicole Kidman, Kristen Bell, Timothée Chalamet and Nicolas Cage.

The development lands amid industry and legal tensions. Scarlett Johansson has previously warned about misuse of artificial intelligence and threatened legal action over voice cloning. The article notes the recently signed Take It Down Act, which adds criminal penalties for distributing non-consensual intimate imagery, and also cites reporting from Gizmodo that observed a pattern of female celebrities being targeted. Grok has not responded to Deadline’s request for comment.

The core issue is moderation and policy: Grok Imagine’s technical capability to generate rapid video content is clear, but its safeguards are inconsistent. Some requests are blocked, others are not. That gap leaves room for exploitation of likenesses, fuels debate about consent and enforcement, and underscores a larger conversation about how platforms and creators of generative technology will balance rapid feature development with legal and ethical obligations.

Impact Score: 86

MEPs back delay for parts of Artificial Intelligence Act

European Parliament committees have endorsed targeted delays to parts of the Artificial Intelligence Act while adding a proposed ban on certain non-consensual image manipulation tools. The changes aim to give companies clearer deadlines, reduce overlap with other EU rules, and extend support to small mid-cap enterprises.

Publisher alliance seeks leverage over Artificial Intelligence web access

A new publisher coalition is trying to reshape how Artificial Intelligence companies access journalism by combining collective bargaining with tougher technical controls. The effort reflects growing pressure on Artificial Intelligence firms to pay for content used in training, search, and user-facing responses.

Military advantage in the age of algorithmic diffusion

American leadership in Artificial Intelligence research and infrastructure may not translate into lasting military advantage. Rapid diffusion of algorithms is shifting the contest toward compute, talent, and the speed of military adoption.

Artificial Intelligence adoption rises among small businesses

Small businesses are increasingly using Artificial Intelligence and reporting strong gains in efficiency, productivity, and expected revenue. Many still face practical barriers and want more training, resources, and policy support to move from experimentation to full implementation.

Corporate legal teams in 2026

In-house legal teams are being pushed beyond traditional advisory roles into strategic business functions spanning contracts, compliance, governance, and risk. Artificial Intelligence is increasingly central to that shift, especially in high-volume workflows such as contract review, due diligence, and regulatory monitoring.
