Meta has formally refused to sign the European Union’s new voluntary code of practice for general-purpose artificial intelligence (AI). Published earlier this month, the code is intended to help companies prepare for the EU’s AI Act, a sweeping regulation scheduled to take effect on August 2, 2025. The Act aims to increase transparency, reduce risk, and set standards for the development and deployment of AI across the region.
In a public statement, Meta’s global affairs chief Joel Kaplan criticized the code as exceeding the scope of the AI Act. He argued that the document introduces legal uncertainty and imposes added burdens on developers, warning that such measures could hinder the advancement of frontier AI technologies. Kaplan also cautioned that the framework might limit European businesses’ ability to build products on top of such models, potentially slowing innovation tied to next-generation tools.
European regulators view the AI Act and the accompanying code as cornerstones of a broader strategy to lead in responsible AI governance. The Commission’s goals include embedding safeguards such as data accountability and ethical design into development practices. By targeting models that pose systemic risk, the EU hopes to set an international benchmark for safety and trust, potentially shaping standards well beyond its borders.
The industry response reflects a growing divide. Several high-profile companies, including Airbus and ASML, signed a joint letter urging the Commission to delay the code by two years, citing concerns about timing and scope. OpenAI, by contrast, has committed to the framework, underscoring differing views on whether the EU’s approach represents prudent guardrails or regulatory overreach. Meta’s refusal lends further momentum to skepticism among some developers and enterprise stakeholders.
Why it matters: The clash spotlights the global debate over balancing innovation with oversight in AI. Europe’s push to formalize accountability and safety is colliding with major technology firms’ concerns about legal ambiguity and operational constraints. As the AI Act nears enforcement, the EU’s strategy faces a pivotal test, and the risk of fragmented, region-specific rulebooks grows if large platforms and regulators cannot align on common standards.