October Artificial Intelligence policy update: UK bets on beating US as EU gambles €1bn

Britain’s Artificial Intelligence minister says the UK can outpace the US on adoption, while the European Commission directs €1bn to its Apply Artificial Intelligence strategy and regulators launch new sandboxes.

October’s policy roundup highlights competing national strategies and fresh regulatory moves shaping the development and deployment of Artificial Intelligence. In the United States, California governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, a first-of-its-kind law that requires frontier model developers to make transparency commitments and report safety incidents. Meanwhile, the European Commission launched the Apply Artificial Intelligence strategy to boost adoption and technological sovereignty, proposing a €1bn allocation that some critics say comes with unexamined trade-offs.

The United Kingdom is pursuing a pro-adoption stance. A new report from the Centre for Emerging Technology and Security and the Alan Turing Institute argues California’s frontier legislation could deliver second-order national security benefits for the UK. At the same time, the Centre for Long-Term Resilience finds the UK government is not prepared for a major Artificial Intelligence incident and recommends corrective steps. Kanishka Narayan, the UK’s Artificial Intelligence minister, told interviewers that the country could outpace the United States if it builds public trust and agency, though observers such as Imogen Parker warn that political pressure risks encouraging techno-solutionism. The Department for Science, Innovation and Technology has also announced AI Growth Labs, a regulatory sandbox intended to help firms test and scale while exploring where rules might be updated to accelerate adoption.

Across Europe and beyond, debates continue about priorities and approaches. Academics Cosmina Dorobantu and Helen Margetts argue for integrating social science into Artificial Intelligence development to keep people central to model design. Critics including Frederike Kaltheuner question whether diverting €1bn from existing budgets is the right move. Discussion of global governance notes a spreading set of influences beyond the so-called Brussels effect, with commentators pointing to emerging Beijing and Delhi effects shaping regulation in regions such as Africa.

The policy conversation also includes calls for realism and pragmatic industry responses. Eryk Salvaggio critiques elements of the safety community for focusing on imagined threats, while Adam Thierer outlines the political economy challenges of regulating general-purpose systems. Anthropic has published a blog post on economic impacts and policy responses such as reskilling and compute taxes. Industry voices, including Alexandru Voica, urge a focus on trustworthy workflows and international standards, and firms like Cohere are expanding public policy teams in the UK to engage with the evolving landscape.


Artificial Intelligence detects suicide risk missed by standard assessments

Researchers at Touro University report that an Artificial Intelligence tool using large language models detected signals of perceived suicide risk that standard multiple-choice assessments missed. The study applied Claude 3.5 Sonnet to audio interview responses and compared model outputs with participants’ self-rated likelihood of attempting suicide.

Artificial Intelligence breakthrough: eye scans spot chronic disease early

Monash-led researchers will use Artificial Intelligence and retinal imaging to build a foundational model that detects systemic diseases such as cardiovascular and chronic kidney disease from simple eye scans. The project brings together the Digital Health Cooperative Research Centre and Optain Health and will analyse de-identified, linked longitudinal data from hundreds of thousands of participants.
