Bank of England roundtables expose hurdles to responsible artificial intelligence deployment

United Kingdom banks and insurers told the Bank of England that cautious risk functions, fragmented global rules and data constraints are slowing responsible artificial intelligence adoption, even as existing supervisory frameworks are seen as largely sufficient for now.

The Bank of England has released a summary of three roundtables on artificial intelligence and machine learning that explored how banks and insurers are deploying new technologies and what is constraining responsible adoption. The sessions, held in late 2025 with challenger and larger United Kingdom-focused banks, global systemically important banks, and insurers, were designed to understand how adoption pressures and risk issues differ by business model, rather than to signal imminent artificial intelligence-specific prudential rules. Participants across sectors generally supported the Prudential Regulation Authority’s principles- and outcomes-based approach, pointing to Supervisory Statement SS1/23 on model risk management as a workable anchor for artificial intelligence governance, provided frameworks are applied proportionately and flexibly.

Most firms said they did not yet see a strong case for detailed artificial intelligence-tailored rules or new guidance, and showed little appetite for a Bank of England or Prudential Regulation Authority-led artificial intelligence sandbox, instead pointing to the Financial Conduct Authority’s “Supercharged Sandbox” and “AI Live Testing” as sufficient venues for trialling tools. A prominent theme was the caution of risk and control functions, which participants said can slow deployment pipelines because of scarce specialist skills and the difficulty of evidencing compliance with supervisory expectations in a robust, repeatable way. Model validation concerns were central: several participants argued that traditional validation approaches may not scale as generative artificial intelligence and more autonomous agentic systems become more common, making full traceability from inputs to outputs and clear human-in-the-loop oversight harder to maintain.

In response, some firms argued that risk management should place greater emphasis on practical testing, monitoring and outcome-based guardrails across broader artificial intelligence systems, and called for supervisors to share more observations on good and bad practice or to convene experts to define evolving best practice. Internationally active institutions highlighted the operational burden of navigating divergent regimes, citing differences between the United Kingdom approach, the United States approach (including SR 11-7 on model risk management) and the EU AI Act. They warned that fragmentation can add cost, slow adoption and hinder consistent scaling of use cases, prompting calls for greater coordination. Participants also raised third-party challenges where vendors are unfamiliar with regulated firms’ compliance obligations, and suggested the Bank of England could convene financial institutions and technology providers to set baseline expectations, especially as embedded models become harder to substitute in agentic deployments. Data protection requirements, including the need to complete Data Protection Impact Assessments in certain cases, and emerging data sovereignty and data location rules were cited as further constraints. Insurers pointed to less frequent customer interactions and more limited customer-level data than banks, which could limit the near-term potential for highly personalised artificial intelligence-enabled products. The note, published on 16 February 2026, confirms the discussions were held under the Chatham House Rule with observers from the Financial Conduct Authority and HM Treasury.


