Ofcom presses X over Grok artificial intelligence sexual image allegations

UK regulator Ofcom has contacted X and Elon Musk’s artificial intelligence firm xAI after reports that Grok can generate sexualised images of children and non-consensual explicit images of women, potentially breaching the Online Safety Act.

UK media regulator Ofcom has made “urgent contact” with xAI, the artificial intelligence business owned by Elon Musk, after reports that its Grok chatbot can be used to generate sexualised images of children and non-consensual explicit images of women. The move follows growing concern over Grok’s image-generation features on X, where users have reportedly used the artificial intelligence system to digitally “undress” women or place them in sexualised scenarios without their consent. Ofcom is investigating whether the use of Grok could breach the UK’s Online Safety Act, which makes it illegal to create or share intimate or sexually explicit images, including artificial intelligence generated “deepfakes”, without a person’s consent.

Ofcom said it is also examining claims that Grok has been producing “undressed images” of specific individuals and reiterated that technology companies are legally required to take appropriate steps to prevent UK users from encountering illegal content and to remove such material swiftly once it is flagged. X has not publicly responded to Ofcom’s request for clarification, although the platform has issued a warning telling users not to use Grok to generate illegal material, including child sexual abuse imagery, and Elon Musk has said on X that anyone prompting Grok to create illegal content would “suffer the same consequences” as if they had uploaded such content themselves. Despite Grok’s acceptable use policy, which explicitly bans depicting real people in a pornographic manner, reports suggest those safeguards have been bypassed, with images of high-profile figures such as Catherine, Princess of Wales, among those allegedly manipulated.

The Internet Watch Foundation has confirmed it has received reports from the public about Grok-generated images but said it has not yet identified content that meets the legal threshold for child sexual abuse material under UK law. Scrutiny is widening beyond the UK: the European Commission is “seriously looking into the matter”, regulators in France, Malaysia and India are reportedly assessing whether Grok breaches their national rules, and X was fined €120 million (£104 million) by EU regulators in December for breaching its obligations under the Digital Services Act. UK politicians including Dame Chi Onwurah have condemned the allegations as “deeply disturbing”, criticised the Online Safety Act as “woefully inadequate” and called for stronger enforcement powers, as the Home Office advances legislation to outlaw “nudification” tools and introduce a new criminal offence targeting suppliers, with possible prison sentences and substantial fines. The controversy has become a focal point in the broader debate over artificial intelligence accountability, platform responsibility and how to set limits on generative technology without unduly restricting free expression.

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.
