United Kingdom artificial intelligence regulatory outlook for February 2026

United Kingdom regulators are reshaping artificial intelligence governance through reforms to data protection, new criminal offences for deepfakes, and emerging guidance on agentic systems and content labelling, while aligning with evolving European Union rules.

The United Kingdom government has published a one-year progress review of the Artificial Intelligence Opportunities Action Plan, which originally set out 50 commitments to drive Artificial Intelligence development and adoption. The government says it has delivered 38 of the 50 actions, with detailed progress available via a public dashboard. Delivered measures include the designation of five Artificial Intelligence “growth zones”, the establishment of the Sovereign Artificial Intelligence Unit, a pilot creative content exchange to scale licensing of digitised assets, and guidelines for preparing government datasets for Artificial Intelligence use. Ongoing work spans regulator coordination, regulatory sandboxes and a call for evidence on Artificial Intelligence growth labs, which would allow companies to test Artificial Intelligence products in supervised real-world conditions under temporarily relaxed regulations. Reform of the United Kingdom text and data mining regime and a full report on copyright and Artificial Intelligence, due by 18 March 2026, remain key open commitments.

Policy around Artificial Intelligence and copyright is still unsettled. Technology secretary Liz Kendall and culture secretary Lisa Nandy told the House of Lords that the government is “having a genuine reset moment” to balance creative industry interests with Artificial Intelligence opportunities and has not chosen a preferred model. They acknowledged urgency but rejected rushing decisions, with Ms Nandy noting there is currently no “workable opt-out proposal on the table”. In parallel, the Data (Use and Access) Act 2025 is reshaping data protection rules for automated decision-making. On 5 February, section 80 of the DUA Act replaced article 22 of the United Kingdom GDPR, softening the previous default prohibition on automated individual decision-making while introducing a specific restriction for decisions involving special category data and for significant decisions based solely on recognised legitimate interests. Controllers must ensure safeguards such as transparency, opportunities for representations, human intervention and the ability to contest decisions, which provide more targeted yet flexible regulation of automated processing.

Legislators are also responding to harms from synthetic media and advanced Artificial Intelligence architectures. New regulations under the DUA Act 2025 created a criminal offence for the creation or commissioning of non-consensual intimate images, including deepfakes, which came into force on 6 February, and the government plans to designate this as a priority offence under the Online Safety Act 2023. The Information Commissioner’s Office has issued a tech futures paper on “agentic Artificial Intelligence” that highlights tension between broad data access and purpose limitation, risks from rapid inference of new personal data, complex multi-agent data flows, and the amplification of existing generative Artificial Intelligence issues, while flagging potential compliance use cases and seeking industry engagement ahead of a statutory code of practice. A House of Commons Library briefing on Artificial Intelligence content labelling reviews visible disclaimers and invisible watermarks, notes that there is currently no United Kingdom legislation requiring Artificial Intelligence-generated content to be labelled, and contrasts this with article 50 of the European Union Artificial Intelligence Act, which sets transparency rules for content produced by generative Artificial Intelligence and is being supplemented by a forthcoming European Commission code of practice. The government has also launched a call for information, closing on 28 February, to gather views on secure Artificial Intelligence computing systems, while European data protection bodies have issued a joint opinion on the proposed Digital Omnibus on Artificial Intelligence, supporting implementation simplification but warning against any dilution of fundamental rights protections.

UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The education committee will examine opportunities to improve teaching and reduce workload, alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see Artificial Intelligence training gap as shadow tool use grows

New research finds that six in ten UK businesses say employees lack comprehensive Artificial Intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected Artificial Intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
