UK delays Artificial Intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

The UK government has published a coordinated set of responses on copyright and Artificial Intelligence, including the Department for Science, Innovation and Technology’s report, an economic impact assessment, and a House of Lords committee report on Artificial Intelligence and the creative industries. According to the accompanying impact assessment, Artificial Intelligence could add approximately £55–140 billion to UK Gross Value Added (GVA) by 2030. The UK’s creative industries accounted for 13% of UK services exports in 2023, reinforcing the political and economic sensitivity of any changes to copyright law.

The government has decided against immediate legal reform. It has stepped back from introducing a commercial text and data mining exception with an opt-out for rights holders, concluding that stakeholder views remain too divided and the evidence base is too limited. The current framework under the Copyright, Designs and Patents Act 1988 therefore remains in place, including the narrow text and data mining exception for non-commercial research. It has also confirmed that it will not introduce a new regulator or impose additional regulatory duties at this stage. That leaves Artificial Intelligence developers facing uncertainty over liability and future legal changes, while creatives and rightsholders remain concerned about enforcement and monetisation.

Policy attention is now shifting to licensing, transparency, and digital replicas. Licensing is expected to become central, with market-led and sector-specific models likely to emerge as businesses seek safer Artificial Intelligence tools and providers look to licensed datasets and stronger intellectual property protections. Transparency over training data and Artificial Intelligence-generated outputs remains under review, but the government is currently favouring international observation and industry-led standards over immediate statutory rules. Reform of digital replicas is treated as more urgent, particularly because UK law offers limited control over the use of an individual’s likeness, voice, or style in deepfakes. A further consultation is expected in summer 2026, alongside ongoing work by Ofcom around the Online Safety Act.

The government has also signalled that copyright protection for computer-generated works should be removed. Under section 9(3) of the Copyright, Designs and Patents Act 1988, authorship of a work generated by a computer without a human author is currently assigned to the person who makes the arrangements necessary for its creation. The proposed direction would mean outputs without sufficient human effort and skill would no longer receive copyright protection, while Artificial Intelligence-assisted works involving human intellectual effort would still qualify, subject to any underlying third-party rights. The challenge will be applying that distinction as creative works increasingly combine human input with embedded Artificial Intelligence features.

With no legislative reset, courts and commercial practice are likely to define the near-term boundaries of lawful Artificial Intelligence training and output use. Cases such as Getty Images v Stability AI in the UK and Like Company v Google in the EU are expected to influence interpretation, although costly litigation offers limited clarity for SMEs and smaller rights holders. For now, businesses are being pushed toward auditing training data, strengthening contracts, considering licensing arrangements, improving transparency, and assessing risks around digital replicas. Creatives and rightsholders are being encouraged to review ownership, tighten terms with platforms and publishers, label works clearly, and prepare to enforce or license their rights in a still unsettled market.


