United Kingdom artificial intelligence regulatory outlook for February 2026

United Kingdom regulators are reshaping artificial intelligence governance through reforms to data protection, new criminal offences for deepfakes, and emerging guidance on agentic systems and content labelling, while aligning with evolving European Union rules.

The United Kingdom government has published a one-year progress review of the Artificial Intelligence Opportunities Action Plan, which originally set out 50 commitments to drive Artificial Intelligence development and adoption. The government says it has delivered 38 of the 50 actions, with detailed progress tracked on a public dashboard. Delivered measures include the designation of five Artificial Intelligence “growth zones”, the establishment of the Sovereign Artificial Intelligence Unit, a pilot creative content exchange to scale licensing of digitised assets, and guidelines for preparing government datasets for Artificial Intelligence use. Ongoing work spans regulator coordination, regulatory sandboxes and a call for evidence on Artificial Intelligence growth labs, which would allow companies to test Artificial Intelligence products in supervised real-world conditions under temporarily relaxed regulations. Reform of the United Kingdom text and data mining regime and a full report on copyright and Artificial Intelligence, due by 18 March 2026, remain key open commitments.

Policy around Artificial Intelligence and copyright is still unsettled. Technology secretary Liz Kendall and culture secretary Lisa Nandy told the House of Lords that the government is “having a genuine reset moment” to balance creative industry interests with Artificial Intelligence opportunities and has not chosen a preferred model. They acknowledged urgency but rejected rushing decisions, with Ms Nandy noting there is currently no “workable opt-out proposal on the table”. In parallel, the Data (Use and Access) Act 2025 is reshaping data protection rules for automated decision-making. On 5 February, section 80 of the DUA Act replaced article 22 of the United Kingdom GDPR, softening the previous default prohibition on automated individual decision-making while introducing a specific restriction for decisions involving special category data and for significant decisions based solely on recognised legitimate interests. Controllers must ensure safeguards such as transparency, opportunities for representations, human intervention and the ability to contest decisions, which provide more targeted yet flexible regulation of automated processing.

Legislators are also responding to harms from synthetic media and advanced Artificial Intelligence architectures. New regulations under the DUA Act 2025 created a criminal offence for the creation or commissioning of non-consensual intimate images, including deepfakes, which came into force on 6 February, and the government plans to designate this as a priority offence under the Online Safety Act 2023. The Information Commissioner’s Office has issued a tech futures paper on “agentic Artificial Intelligence” that highlights tension between broad data access and purpose limitation, risks from rapid inference of new personal data, complex multi-agent data flows, and the amplification of existing generative Artificial Intelligence issues, while flagging potential compliance use cases and seeking industry engagement ahead of a statutory code of practice. A House of Commons Library briefing on Artificial Intelligence content labelling reviews visible disclaimers and invisible watermarks, notes that there is currently no United Kingdom legislation requiring Artificial Intelligence-generated content to be labelled, and contrasts this with article 50 of the European Union Artificial Intelligence Act, which sets transparency rules for content produced by generative Artificial Intelligence and is being supplemented by a forthcoming European Commission code of practice. The government has also launched a call for information, closing on 28 February, to gather views on secure Artificial Intelligence computing systems, while European data protection bodies have issued a joint opinion on the proposed Digital Omnibus on Artificial Intelligence, supporting implementation simplification but warning against any dilution of fundamental rights protections.


Artificial Intelligence speeds quantum encryption threat timeline

Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.

New methods aim to improve Large Language Model reasoning

A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.

Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the centre of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.
