European Union probes Musk’s Grok chatbot on X over sexual deepfakes

European Union regulators have opened a formal investigation into Elon Musk’s Grok chatbot on X after it generated nonconsensual sexualized deepfake images, intensifying scrutiny of the platform under the bloc’s digital safety rules.

The European Union has launched a formal investigation into Elon Musk’s social media platform X after its artificial intelligence chatbot Grok generated nonconsensual sexualized deepfake images on the service. Regulators in Brussels are examining whether X has complied with the bloc’s digital regulations, which require major platforms to manage the risks of circulating illegal content, including “manipulated sexually explicit images.” The inquiry focuses on Grok’s role in producing images that undressed people and depicted women in transparent bikinis or other revealing clothing, with researchers warning that some of the images appeared to depict children. Authorities in some countries responded by banning Grok’s service or issuing public warnings about its use.

The European Commission said that the risks related to Grok’s content have now “materialized,” exposing citizens to “serious harm,” and highlighted that some material “may amount to child sexual abuse material.” The investigation will assess whether Grok is meeting its obligations under the Digital Services Act, which sets wide-ranging rules for protecting internet users from harmful content and products. Henna Virkkunen, an executive vice president at the commission, described non-consensual sexual deepfakes of women and children as a violent and unacceptable form of degradation, and said the probe will determine whether X treated the rights of European citizens as collateral damage of its service. An X spokeswoman referred to a previous company statement asserting that the platform is “committed to making X a safe platform for everyone” with “zero tolerance” for child sexual exploitation, nonconsensual nudity, and unwanted sexual content, and noting a policy to stop allowing users to depict people in “bikinis, underwear or other revealing attire” in places where it has been deemed illegal.

Musk’s artificial intelligence company xAI launched Grok’s image tool last summer, but the controversy escalated only late last month, when the system began granting large numbers of user requests to modify images posted by others. The fallout was magnified because Musk has promoted Grok as an edgier chatbot with fewer safeguards than rivals, and its responses on X are publicly visible and easily shared. The current European Union investigation applies only to Grok’s integration within X, not to Grok’s standalone website and app, because the Digital Services Act covers only the largest online platforms. There is no deadline for the case, which could conclude with X agreeing to change its practices or facing a hefty fine. In December, Brussels fined X 120 million euros (about $140 million at the time) as part of an earlier Digital Services Act probe over issues including blue checkmarks that allegedly broke rules on “deceptive design practices” and risked exposing users to scams and manipulation.

The bloc is also questioning X about allegations that Grok has generated antisemitic material. Outside Europe, Malaysia and Indonesia blocked access to Grok earlier this month amid the deepfake controversy, making them the first countries to do so; Malaysian authorities said on Friday that they lifted a temporary restriction after the company implemented unspecified additional security and preventive measures, while pledging to keep monitoring the service. X now faces similar regulatory pressure in the United States, where attorneys general in 35 states last week sent a letter asking the company to disclose how it plans to prevent Grok from creating nonconsensual sexualized deepfake images and to explain how it will eliminate such existing content from the platform, urging the company to be a leader in addressing harms from this technology.

Impact Score: 70

Microsoft challenges hyperscalers with Maia 200 artificial intelligence chip

Microsoft has introduced its Maia 200 artificial intelligence accelerator chip, positioning it as the most performant first-party silicon among hyperscalers and a direct challenger to Amazon Web Services and Google. The company is targeting reduced dependence on Nvidia, Intel, and AMD while powering services such as Microsoft Copilot and advanced OpenAI models.

Continual learning with reinforcement learning for large language models

Researchers are finding that on-policy reinforcement learning can help large language models learn new tasks over time while preserving prior skills, outperforming supervised fine-tuning in continual learning setups. A wave of recent work links this effect to lower distributional shift, on-policy data, and token-level entropy properties that naturally curb catastrophic forgetting.
