Artificial intelligence in the dock: should machines have legal rights?

Recent multi-billion pound investment in the UK's artificial intelligence infrastructure has refocused attention on regulation. The Law Commission's discussion paper, "AI and the Law", asks whether existing frameworks can address liability and accountability, and whether some form of legal personality for machines should be considered.

Published on December 3, 2025, the article reviews the UK's position on artificial intelligence regulation. It notes that, unlike the European Union with its AI Act, the UK has no dedicated Artificial Intelligence Act and has preferred a decentralised, principles-based approach that relies on existing regulators such as the CMA, FCA, ICO and Ofcom. The Law Commission has issued a discussion paper, titled "AI and the Law", and launched a project limited to public sector uses of artificial intelligence and automated decision-making. The Commission cannot make law, but its recommendations often inform government policy.

The Law Commission paper aims to raise awareness of legal risks and to prompt wider discussion rather than to propose detailed reforms. It revisits long-standing questions about whether granting some form of legal personality to artificial intelligence systems could close so-called liability gaps. The paper highlights three core challenges: the autonomy and adaptiveness of systems that learn and change post-deployment; the difficulty of establishing factual causation, legal causation and mens rea when outputs are unpredictable; and the opacity of models protected by proprietary rights or technical complexity. It also addresses oversight and over-reliance concerns in regulated professions and public decision-making, as well as training and data issues, including copyright and personal data protection.

The article concludes that the Law Commission’s paper is a measured starting point that organises key issues and identifies areas for further policy work. It flags that the suggestion of legal personality raises complex criteria questions such as how to define autonomy thresholds and design accountability mechanisms. Without follow-up work that converts the discussion into clear priorities and concrete proposals, uncertainty will persist. Developments in the EU and elsewhere will continue to be watched closely as domestic and global regulatory responses to Artificial Intelligence evolve.

Impact Score: 55

OpenRouter highlights expanding roster of free artificial intelligence models

OpenRouter is expanding free access to high-end artificial intelligence models, aggregating open-weight and frontier systems from multiple providers under a single routing layer. The lineup targets agentic, long-context, multimodal, and code-centric workloads while keeping listed models free at $0 per million input tokens and $0 per million output tokens.

Physical artificial intelligence emerges as manufacturing’s next competitive edge

Manufacturers are moving beyond traditional automation toward physical artificial intelligence that can perceive, reason, and act in real factories, with Microsoft and NVIDIA positioning their technologies as the backbone for this shift. Trust, governance, and human oversight are presented as core requirements for scaling these systems safely.

Weird World column explores strange frontiers of science and society

Research in the Weird World: Science & Society section spans the ethical risks of artificial intelligence therapy, ancient plagues decoded through DNA, climate shocks that reshaped civilizations, and other unconventional investigations at the edge of science and culture.
