The Bank of England has published a summary of three roundtables on artificial intelligence (AI) and machine learning, exploring how banks and insurers are deploying new technologies and what is constraining responsible adoption. The sessions, held in late 2025 with challenger and larger UK-focused banks, global systemically important banks, and insurers, were designed to understand how adoption pressures and risk issues differ by business model, rather than to signal imminent AI-specific prudential rules. Participants across sectors generally supported the Prudential Regulation Authority’s (PRA’s) principles- and outcomes-based approach, pointing to Supervisory Statement SS1/23 on model risk management as a workable anchor for AI governance, provided frameworks are applied proportionately and flexibly.
Most firms said they did not yet see a strong case for detailed AI-tailored rules or new guidance, and showed little appetite for a Bank of England or PRA-led AI sandbox, instead pointing to the Financial Conduct Authority’s “Supercharged Sandbox” and “AI Live Testing” as sufficient venues for trialling tools. A prominent theme was the caution of risk and control functions, which participants said can slow deployment pipelines because of scarce specialist skills and the difficulty of evidencing compliance with supervisory expectations in a robust, repeatable way. Model validation concerns were central: several participants argued that traditional validation approaches may not scale as generative AI and more autonomous agentic systems become more common, making full traceability from inputs to outputs and clear human-in-the-loop oversight harder to maintain.
In response, some firms argued that risk management should place greater emphasis on practical testing, monitoring and outcome-based guardrails across broader AI systems, and called for supervisors to share more observations on good and bad practice, or to convene experts to define evolving best practice. Internationally active institutions highlighted the operational burden of navigating divergent regimes, citing differences between the UK approach, the US approach (including SR 11-7 on model risk management) and the EU AI Act. They warned that fragmentation can add cost, slow adoption and hinder the consistent scaling of use cases, prompting calls for greater international coordination. Participants also raised third-party challenges where vendors are unfamiliar with regulated firms’ compliance obligations, and suggested the Bank of England could convene financial institutions and technology providers to set baseline expectations, especially as embedded models become harder to substitute in agentic deployments. Data protection requirements, including the need to complete Data Protection Impact Assessments in certain cases, and emerging data sovereignty and data location rules were cited as further constraints. Insurers, for their part, pointed to less frequent customer interactions and more limited customer-level data than banks, which could curb the near-term potential for highly personalised AI-enabled products. The note, published on 16 February 2026, confirms the discussions were held under the Chatham House Rule with observers from the Financial Conduct Authority and HM Treasury.
