EU Parliament backs ban on Artificial Intelligence nudifier apps

European parliament committees have endorsed changes to the Artificial Intelligence Act that would ban apps used to create non-consensual nude or sexually explicit images of real people. Lawmakers also backed delays and targeted adjustments to compliance rules for high-risk systems and watermarking requirements.

European Union lawmakers moved to ban so-called nudifier apps, systems that use Artificial Intelligence to alter images of real people into nude or sexually explicit versions without consent. The change was adopted on 18 March by the European Parliament’s committees on the Internal Market and Consumer Protection and on Civil Liberties, Justice and Home Affairs, with 101 votes in favour, 9 against, and 8 abstentions. The proposal forms part of the ongoing review of the Artificial Intelligence Act, the European Union’s main regulation on the use of Artificial Intelligence.

The push accelerated after controversy around Grok, the assistant on X, which was linked to the mass generation of content that virtually undressed real people without their knowledge. According to data released by the NGO Center for Countering Digital Hate, Grok is said to have produced 3 million sexually explicit images and 20,000 artificial reproductions of child sexual abuse over an eleven-day period between late 2025 and early 2026, before the platform controlled by Elon Musk announced measures to combat their spread. Lawmakers framed the ban more broadly than a response to a single platform, presenting it as a measure that had been strongly demanded by the public.

The committees also approved wider amendments intended to make the Artificial Intelligence Act simpler and more flexible for businesses. They supported delaying some obligations for high-risk systems, including those using biometrics or operating in sectors such as infrastructure and healthcare. Current legislation requires companies to comply with the new rules by 2 August this year, but lawmakers argued that the definition of key standards is unlikely to be ready by that date. They proposed 2 December 2027 for high-risk Artificial Intelligence systems listed directly in the law, and 2 August 2028 for systems already covered by other European Union safety and market surveillance rules.

On transparency rules, lawmakers supported more time for compliance with watermarking requirements for Artificial Intelligence-generated content, but with a shorter delay than the European Commission had proposed. The committees suggested an extension until 20 November 2026 instead of 2 February 2027. Other changes include extending support measures beyond small and medium-sized enterprises to Small Mid-Cap Enterprises, easing obligations for products already regulated under sector-specific European laws, and allowing companies in limited cases and with safeguards to use personal data to identify and correct bias in Artificial Intelligence systems. The amendments will go to a plenary vote in Strasbourg on 26 March, after which negotiations with Member States in the Council of the European Union may begin.

Impact Score: 68

Chancellor sets principles for UK-EU alignment

Rachel Reeves has outlined a growth plan built around closer UK-EU ties, faster Artificial Intelligence adoption, and stronger regional development. The strategy sets new principles for regulatory alignment, expands support for innovation, and shifts more investment power to city regions.

Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is “totally false,” even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.
