Meta draws a line on EU artificial intelligence rules, signaling a rift in tech oversight

Meta declined to sign the European Union’s voluntary code of practice for general-purpose Artificial Intelligence, arguing it goes beyond the scope of the forthcoming Artificial Intelligence Act. The move underscores widening tensions over how far Brussels should go in governing frontier models.

Meta has formally refused to sign the European Union’s new voluntary code of practice for general-purpose Artificial Intelligence systems. Published earlier this month, the code is intended to help companies prepare for the EU’s Artificial Intelligence Act, a sweeping regulation scheduled to take effect on August 2, 2025. The Act aims to increase transparency, reduce risk, and establish standards for the development and deployment of Artificial Intelligence across the region.

In a public statement, Meta’s global affairs chief Joel Kaplan criticized the code as exceeding the scope of the Artificial Intelligence Act. He argued that the document introduces legal uncertainties and adds burdens for developers, warning that these measures could hinder the advancement of powerful Artificial Intelligence technologies. Kaplan also cautioned that the framework might limit European businesses’ ability to build products on top of such models, potentially slowing innovation tied to next-generation tools.

European regulators view the Artificial Intelligence Act and the accompanying code as cornerstones of a broader strategy to lead in responsible Artificial Intelligence governance. The Commission’s goals include embedding safeguards such as data accountability and ethical design into development practices. With the Act targeting models that pose systemic risk, the EU hopes to set an international benchmark for safety and trust, potentially influencing standards well beyond its borders.

The industry response reflects a growing divide. Several high-profile companies, including Airbus and ASML, signed a joint letter urging the Commission to delay the code by two years, citing concerns about timing and scope. At the same time, OpenAI has committed to the framework, highlighting differing views on whether the EU’s approach represents prudent guardrails or regulatory overreach. Meta’s refusal adds momentum to skepticism among some developers and enterprise stakeholders.

Why it matters: The clash spotlights the global debate over balancing innovation with oversight in Artificial Intelligence. Europe’s push to formalize accountability and safety is colliding with concerns from major technology firms about legal ambiguity and operational constraints. As the Artificial Intelligence Act nears enforcement, the EU’s strategy faces a pivotal test, and the risk of fragmented, region-specific rulebooks grows if large platforms and regulators cannot align on common standards.

Impact Score: 68

ChatGPT Images adds thinking capability

OpenAI has upgraded ChatGPT Images with a new thinking mode that can search the internet, generate multiple images, and verify outputs before finalizing results. The update also improves text rendering, dense compositions, multilingual support, and style flexibility.

OpenAI launches workspace agents in ChatGPT

OpenAI has introduced workspace agents in ChatGPT, giving teams shared Codex-powered agents that can handle multi-step work across business tools and Slack. The feature is aimed at recurring organizational workflows with admin controls, approvals, and enterprise monitoring.

SpaceX gains option to buy Artificial Intelligence coding startup Cursor

SpaceX and Cursor are deepening their partnership around coding models and compute, with an acquisition option that could reshape Cursor’s enterprise positioning. The arrangement raises immediate questions about model neutrality, data contracts, and future access to third-party models.
