The UK government has postponed plans to regulate artificial intelligence by at least a year, as ministers pursue a broader legislative package that will address a range of concerns including technology safety and the use of copyrighted material by artificial intelligence developers. Technology secretary Peter Kyle aims to introduce a "comprehensive" bill in the next parliamentary session, with the legislation now unlikely to be ready before the next king's speech, which sources suggest could occur by May 2026. This delay has raised alarm among those worried about the growing influence of artificial intelligence and the absence of firm regulatory oversight in the rapidly advancing sector.
Initially, the Labour government had proposed a narrowly focused artificial intelligence bill that would have quickly established requirements for large language model developers, such as the makers of ChatGPT, to provide their systems for testing by the country's AI Security Institute. This measure was meant to curb risks associated with increasingly capable artificial intelligence models. However, ministers delayed the initiative out of concern that it might diminish the UK's appeal to artificial intelligence companies, and in order to coordinate with regulatory developments under the new US administration.
The push for a more substantive bill comes amid escalating friction with the creative sector over copyright protections. Current proposals, being contested in the House of Lords as part of a separate data bill, would allow artificial intelligence firms to train their systems on copyrighted works unless rights holders specifically opt out. Prominent artists such as Elton John and Paul McCartney have joined campaigns opposing the government's position. An amendment requiring artificial intelligence companies to disclose the use of copyrighted data for training has gained traction in the Lords, but ministers remain reluctant to enforce additional obligations, maintaining that the data bill is not the proper forum for such changes. Instead, they pledge to use the forthcoming artificial intelligence bill to find a solution, vowing to consult cross-party parliamentarians and the creative industry, and to publish technical assessments on economic and copyright impacts.
Recent polling by the Ada Lovelace Institute and the Alan Turing Institute reveals that an overwhelming majority of UK citizens support government intervention to halt dangerous artificial intelligence products and want public sector regulators to oversee safety issues. Experts observe that the UK is navigating a middle course between the more stringent European Union approach and the lighter-touch model favoured by the US, with policymakers seeking to encourage innovation while also protecting consumers and creative professionals from potential harms posed by artificial intelligence technologies.