The UK government has set out a cautious position on copyright and Artificial Intelligence, declining to reform copyright law for now and stepping back from its earlier preferred approach of a broad copyright exception with an opt-out mechanism. It plans to gather further evidence on how copyright law is affecting the development and deployment of Artificial Intelligence, while working with industry and experts on best practice for input transparency and the labelling of Artificial Intelligence-generated content. It also proposes not to intervene in the licensing market at this stage, although market-led approaches will remain under review.
The government also signalled a broader shift toward protecting human creativity and addressing new risks from synthetic content. It proposes removing the specific copyright protection for wholly computer-generated works, while maintaining protection for works created with Artificial Intelligence assistance. It is also exploring options to address risks from realistic impersonation and digital replicas, including whether a new personality right may be appropriate. On enforcement, it plans to continue working with law enforcement and the judiciary, identify enforcement barriers and consider regulatory oversight of transparency measures if legislation is introduced, but no new regulator is proposed at this stage.
Pressure is also coming from Parliament. The House of Lords Communications and Digital Committee has urged the government to reject a new commercial text and data mining exception with an opt-out model and instead strengthen licensing, transparency and enforcement within the existing framework. It recommended that the government publish a final, evidence-based decision on its approach to Artificial Intelligence and copyright within the next 12 months. The committee also called for safeguards against unauthorised digital replicas, mandatory transparency for large Artificial Intelligence developers on training data, support for a sustainable licensing market, and technical standards for rights reservation, provenance and labelling.
In the EU, the European Commission has published a second draft of its voluntary code of practice on marking and labelling Artificial Intelligence-generated content. The code is intended to help providers and deployers meet transparency obligations under Article 50 of the EU Artificial Intelligence Act. The Commission says the new draft is more streamlined and flexible, promotes open standards and an EU icon for labelling, and is open for feedback until 30 March 2026. The code is expected to be finalised by the beginning of June 2026, and the transparency obligations are set to become applicable on 2 August 2026, subject to proposed changes in the Digital Omnibus on Artificial Intelligence.
Broader EU reform is moving quickly. Discussions continue on the wider Digital Omnibus Regulation, while the Digital Omnibus on Artificial Intelligence is advancing through the legislative process, with the Council having adopted its position and the European Parliament nearing its own. Separately, the European Parliament has adopted a resolution calling for a supplementary legal framework for licensing copyrighted material used in generative Artificial Intelligence, stronger transparency, fair compensation for rightsholders and support for voluntary collective licensing agreements. It also said that fully Artificial Intelligence-generated content that does not meet the criteria for copyright protection should remain ineligible for protection.
International regulators are also focusing on privacy harms from synthetic media. Data protection authorities including the UK Information Commissioner’s Office and the European Data Protection Board issued a joint statement on Artificial Intelligence-generated imagery, warning about realistic images and videos of real people being created without consent. The statement calls for robust safeguards against misuse of personal information, meaningful transparency about system capabilities and safeguards, accessible removal mechanisms for harmful content, and stronger protections for children. Regulators are urging organisations to build these safeguards in from the outset and engage proactively with oversight bodies.
