European and UK policy activity around Artificial Intelligence accelerated across February and March 2026, with new measures spanning content labelling, copyright, child safety, research investment, and governance for agentic systems. In the EU, the European Commission issued a second draft of a voluntary Code of Practice on Marking and Labelling of Artificial Intelligence-generated content, designed to support multi-layered marking and clearer labelling of outputs, including deepfakes, while easing compliance burdens for providers and deployers. The European Parliament also adopted a draft resolution on copyright and generative Artificial Intelligence that reinforces creators’ rights, highlights the unlawful use of copyrighted works in training, and backs fair remuneration, stronger bargaining power for rights holders, and a coherent EU licensing framework.
In the UK, the government opened a consultation on stronger online protections for children, including possible limits on social media access, restrictions on addictive platform features, and tighter age verification. The consultation remains open until 26 May 2026 and is supported by live pilots to ensure that decisions are grounded in real-world evidence. The UK government has also unveiled plans to establish a new £40 million Artificial Intelligence research lab. The Department for Science, Innovation and Technology said the lab will focus on advanced frontier models with an emphasis on safety, transparency, and societal benefit, while also drawing on partnerships with industry and academia. A call for proposals is open until 31 March 2026.
Copyright policy is becoming a sharper point of debate in both the UK and France. The UK House of Lords Communications and Digital Committee set out a licensing-led path that would require transparency over training data and fair payment to creators, while rejecting a commercial text and data mining exception. It also called for new protections against unauthorised digital replicas and “in the style of” outputs, plus a mandatory transparency framework for training data. In France, the Senate introduced a draft bill on 12 December 2025 that would create a legal presumption that copyrighted cultural content has been exploited by an Artificial Intelligence system when credible indications make that use plausible. The Senate will debate the draft bill in public session on 8 April 2026.
Governance of agentic systems is also emerging as a core concern. On 22 January 2026, Singapore announced a new Model Artificial Intelligence Governance Framework for Agentic Artificial Intelligence through the Ministry of Digital Development and Information and the Infocomm Media Development Authority. In Europe, data protection authorities are taking a harder line. The UK Information Commissioner warned that agentic Artificial Intelligence is likely to create significant privacy risks, especially where systems gain broad access to mailboxes, shared drives, or live databases. From an EU perspective, the Dutch data protection authority advised businesses and consumers not to use autonomous Artificial Intelligence agents, calling them a “Trojan horse” vulnerable to misuse, while Italy’s regulator also warned that these tools present heightened risks compared with traditional prompt-based models. Regulators made clear that organisations remain responsible under the GDPR and UK GDPR and may need appropriate safeguards and, in many cases, a Data Protection Impact Assessment before deployment.
