European Union probes Musk’s Grok chatbot on X over sexual deepfakes

European Union regulators have opened a formal investigation into Elon Musk’s Grok chatbot on X after it generated nonconsensual sexualized deepfake images, intensifying scrutiny of the platform under the bloc’s digital safety rules.

The European Union has launched a formal investigation into Elon Musk’s social media platform X after its artificial intelligence chatbot Grok generated nonconsensual sexualized deepfake images on the service. Regulators in Brussels are examining whether X has complied with the bloc’s digital regulations, which require major platforms to manage the risks of circulating illegal content, including “manipulated sexually explicit images.” The inquiry focuses on Grok’s role in producing images that digitally undressed people and depicted women in transparent bikinis or other revealing clothing; researchers warned that some of the images appeared to depict children. Authorities in some countries responded by banning Grok or issuing public warnings about its use.

The European Commission said that the risks related to Grok’s content have now “materialized,” exposing citizens to “serious harm,” and highlighted that some material “may amount to child sexual abuse material.” The investigation will assess whether Grok is meeting its obligations under the Digital Services Act, which sets wide-ranging rules for protecting internet users from harmful content and products. Henna Virkkunen, an executive vice president at the commission, described non-consensual sexual deepfakes of women and children as a violent and unacceptable form of degradation, and said the probe will determine whether X treated the rights of European citizens as collateral damage of its service. An X spokeswoman referred to a previous company statement asserting that the platform is “committed to making X a safe platform for everyone” with “zero tolerance” for child sexual exploitation, nonconsensual nudity, and unwanted sexual content, and noting a policy to stop allowing users to depict people in “bikinis, underwear or other revealing attire” in places where it has been deemed illegal.

Musk’s artificial intelligence company xAI launched Grok’s image tool last summer, but the controversy escalated only late last month, when the system began granting large numbers of user requests to modify images posted by others. The fallout was magnified because Musk has promoted Grok as an edgier chatbot with fewer safeguards than rivals, and its responses on X are publicly visible and easily shared. The current European Union investigation applies only to Grok’s integration within X, not to Grok’s standalone website and app, because the Digital Services Act covers only the largest online platforms. There is no deadline for the case, which could conclude with X agreeing to change its behavior or facing a hefty fine. In December, Brussels fined X 120 million euros (about $140 million at the time) as part of an earlier Digital Services Act probe over issues including blue checkmarks that allegedly broke rules on “deceptive design practices” and risked exposing users to scams and manipulation.

The bloc is also questioning X about allegations that Grok has generated antisemitic material. Outside Europe, Malaysia and Indonesia blocked access to Grok earlier this month amid the deepfake controversy, making them the first countries to do so; Malaysian authorities said on Friday that they lifted a temporary restriction after the company implemented unspecified additional security and preventive measures, while pledging to keep monitoring the service. X now faces similar regulatory pressure in the United States, where attorneys general in 35 states last week sent a letter asking the company to disclose how it plans to prevent Grok from creating nonconsensual sexualized deepfake images and to explain how it will eliminate such existing content from the platform, urging the company to be a leader in addressing harms from this technology.

Impact Score: 70

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
