Artificial intelligence in the dock: should machines have legal rights?

Recent multi-billion pound investment in the UK's Artificial Intelligence infrastructure has refocused attention on regulation. The Law Commission's discussion paper, "AI and the Law", asks whether existing frameworks can address liability and accountability, and whether some form of legal personality for machines should be considered.

Published on December 3, 2025, the article reviews the UK position on Artificial Intelligence regulation following recent multi-billion pound investment in the country's infrastructure and scientific research. It notes that, unlike the European Union with its AI Act, the UK has no dedicated Artificial Intelligence statute and has preferred a decentralised, principles-based approach that relies on existing regulators such as the CMA, FCA, ICO and Ofcom. The Law Commission has issued a discussion paper, titled "AI and the Law", and launched a project limited to public sector uses of Artificial Intelligence and automated decision-making. The Commission cannot make law, but its recommendations often inform government policy.

The Law Commission paper aims to raise awareness of legal risks and to prompt wider discussion rather than to propose detailed reforms. It revisits long-standing questions about whether granting some form of legal personality to Artificial Intelligence systems could close so-called liability gaps. The paper highlights core challenges: the autonomy and adaptiveness of systems that learn and change post-deployment, the difficulty of establishing factual and legal causation and mens rea when outputs are unpredictable, and the opacity of models protected by proprietary rights or technical complexity. It also addresses oversight and over-reliance concerns in regulated professions and public decision-making, and training and data issues, including copyright and personal data protection.

The article concludes that the Law Commission’s paper is a measured starting point that organises key issues and identifies areas for further policy work. It flags that the suggestion of legal personality raises complex criteria questions such as how to define autonomy thresholds and design accountability mechanisms. Without follow-up work that converts the discussion into clear priorities and concrete proposals, uncertainty will persist. Developments in the EU and elsewhere will continue to be watched closely as domestic and global regulatory responses to Artificial Intelligence evolve.

Have large language models plateaued?

A Hacker News thread debates whether large language models have plateaued or whether recent gains come from better tooling and applications, with autonomous Artificial Intelligence agents showing striking demos and notable failures.

China eyes chip-stacking to narrow gap with NVIDIA

Wei Shaojun said China could narrow its technology gap with NVIDIA by stacking 14 nm logic chips with 18 nm DRAM and new compute architectures. The approach is aimed at improving Artificial Intelligence performance and energy efficiency while relying on a fully domestic supply chain.
