Meta draws a line on EU artificial intelligence rules, signaling a rift in tech oversight

Meta has declined to sign the European Union's voluntary code of practice for general-purpose AI, arguing that it goes beyond the scope of the forthcoming AI Act. The move underscores widening tensions over how far Brussels should go in governing frontier models.

Meta has formally refused to sign the European Union's new voluntary code of practice for general-purpose AI systems. Published earlier this month, the code is intended to help companies prepare for the EU's AI Act, a sweeping regulation scheduled to take effect on August 2, 2025. The Act aims to increase transparency, reduce risk, and establish standards for the development and deployment of AI across the region.

In a public statement, Meta's global affairs chief Joel Kaplan criticized the code as exceeding the scope of the AI Act. He argued that the document introduces legal uncertainty and adds burdens for developers, warning that these measures could hinder the advancement of powerful AI technologies. Kaplan also cautioned that the framework might limit European businesses' ability to build products on top of such models, potentially slowing innovation tied to next-generation tools.

European regulators view the AI Act and the accompanying code as cornerstones of a broader strategy to lead in responsible AI governance. The Commission's goals include embedding safeguards such as data accountability and ethical design into development practices. With the Act targeting models that pose systemic risk, the EU hopes to set an international benchmark for safety and trust, potentially influencing standards well beyond its borders.

The industry response reflects a growing divide. Several high-profile companies, including Airbus and ASML, signed a joint letter urging the Commission to delay the code by two years, citing concerns about timing and scope. OpenAI, by contrast, has committed to the framework, highlighting differing views on whether the EU's approach represents prudent guardrails or regulatory overreach. Meta's refusal adds momentum to skepticism among some developers and enterprise stakeholders.

Why it matters: The clash spotlights the global debate over balancing innovation with oversight in AI. Europe's push to formalize accountability and safety is colliding with major technology firms' concerns about legal ambiguity and operational constraints. As the AI Act nears enforcement, the EU's strategy faces a pivotal test, and the risk of fragmented, region-specific rulebooks grows if large platforms and regulators cannot align on common standards.

Impact Score: 68

A blueprint for implementing RAG at scale

Retrieval-augmented generation (RAG) is positioned as essential for most large language model applications because it injects company-specific knowledge into responses. For organizations rolling out generative AI, the approach promises higher accuracy and fewer hallucinations.
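The core RAG loop can be sketched in a few lines: score stored documents against the query, keep the most relevant ones, and inject them into the model prompt. The sketch below is a minimal illustration, not any vendor's API; the word-overlap scoring stands in for the learned embeddings and vector database a production system would use, and all names (`embed`, `retrieve`, `build_prompt`, `kb`) are hypothetical.

```python
# Minimal RAG sketch: toy retrieval + prompt assembly (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real system would use a learned embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject retrieved company-specific context into the model prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical company knowledge base.
kb = [
    "Our refund window is 30 days from delivery.",
    "Support hours are 9am to 5pm CET on weekdays.",
    "Enterprise plans include a dedicated account manager.",
]
print(build_prompt("What is the refund policy?", kb))
```

The grounding step is the point: the model answers from retrieved company documents rather than from its training data alone, which is what the accuracy and hallucination-reduction claims rest on.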

How artificial intelligence will accelerate biomedical research and discovery

A Microsoft Research Podcast episode brings together Daphne Koller, Noubar Afeyan, and Eric Topol to examine how artificial intelligence is reshaping biomedicine, from target discovery and autonomous labs to the pursuit of a virtual cell. The discussion charts rapid progress since GPT-4 and what it means for patients, researchers, and regulators.
