European Union investigates X over sexualized AI images and safety risks

European Union regulators have opened formal proceedings against X over its handling of sexualized Artificial Intelligence (AI) images of a child and broader systemic risks from its AI features. The case will test the bloc’s new digital rules and X’s responsibilities for moderation and user protection.

European Union regulators have launched a formal inquiry into X over its handling of sexualized AI images of a child and its broader approach to AI-driven features on the platform. Officials are examining whether X complied with the bloc’s new digital regulations, which impose strict duties on large online platforms to manage systemic risks, protect minors, and respond quickly to reports of illegal or harmful content. The investigation focuses on how the sexualized AI images were generated, circulated, and moderated, and whether X had adequate processes in place to detect and address such material before and after users flagged it.

The European Commission is using its powers under the Digital Services Act to request detailed information from X about its AI systems, content recommendation tools, and safeguards for children. Regulators are assessing whether the company sufficiently evaluated and mitigated the “systemic risks” arising from integrating generative AI into its service, including the potential for large-scale distribution of manipulated images that target minors. Officials are also reviewing how quickly X responded to specific reports about the sexualized AI images, what internal escalation steps were taken, and whether law enforcement was notified in a timely and appropriate manner.

The outcome of the case could have significant implications for how major platforms deploy AI features in the European Union and the level of oversight they must apply to prevent misuse. If the commission finds serious or repeated violations of the Digital Services Act, X could face substantial penalties and binding orders to change its systems and policies. The inquiry adds to mounting regulatory pressure on large technology companies over the safety, transparency, and governance of AI tools, and will serve as an early test of how the European Union intends to enforce its new digital rulebook in situations involving vulnerable users and high-risk content.

Impact Score: 70

Indiana launches AI business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt AI with practical guidance, workshops, and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages, and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader AI use in game development

EA CEO Andrew Wilson defended the company’s internal use of AI after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
