Artificial intelligence regulation: comparing the UK, EU and US frameworks

A side-by-side look at how the UK, EU, and US are regulating Artificial Intelligence—and what it means for organizations globally.

The regulatory landscape for artificial intelligence continues to evolve rapidly across key global jurisdictions, each taking notably distinct approaches. The United Kingdom is pursuing an innovation-oriented, principles-led model, outlined in a 2023 government white paper. Rather than establishing a single artificial intelligence regulator, the UK relies on existing bodies such as the Competition and Markets Authority and the Financial Conduct Authority to extend their remits to artificial intelligence oversight. Coordination comes via several organizations, including the Department for Science, Innovation and Technology and the newly launched AI Security Institute. Although the current model is regulator-light and favors sectoral innovation, there are active proposals to centralize oversight; a Private Members' Bill for an Artificial Intelligence authority is under debate in the House of Lords, aiming to align future UK regulation more closely with the European Union's framework.

The European Union stands apart with its comprehensive, prescriptive AI Act (Regulation (EU) 2024/1689), adopted in May 2024, which establishes uniform rules addressing systemic risks and fundamental rights. The Act is broad, covering all sectors, and relies on a combination of existing and new regulatory entities, including a central European AI Board and nationally appointed authorities. Additional sector-specific regulations, such as those on machinery safety and product liability, explicitly encompass artificial intelligence systems, compelling organizations to achieve compliance along both horizontal and vertical regulatory axes. The AI Act will largely take effect from August 2026, but the European Commission encourages organizations to align proactively with its provisions through the AI Pact initiative.

Meanwhile, the United States features a fragmented and unsettled regulatory environment. A recent executive order under President Trump reversed policies viewed as impeding artificial intelligence development and reinforced the country's innovation-first philosophy. In May 2025, the 'One Big Beautiful Bill Act' passed the House of Representatives, combining substantial investments in artificial intelligence with a proposed decade-long moratorium restricting state-level regulation in favor of national uniformity. This move threatens newly enacted state laws, such as Utah's novel generative artificial intelligence chatbot restrictions, and leaves the efficacy and future of decentralized approaches uncertain across the US for the coming years.

Globally, despite high-level efforts at harmonization—exemplified by the G7’s Guiding Principles and the Bletchley Declaration—concrete convergence among the UK, EU, and US remains out of reach. Organizations operating internationally will have to navigate a patchwork of overlapping and sometimes conflicting artificial intelligence rules, underscoring the urgent need for tailored compliance strategies in a fast-shifting regulatory climate.
