UK financial system underprepared for major artificial intelligence shocks, MPs warn

The Treasury Select Committee has warned that UK regulators are taking too passive an approach to artificial intelligence in financial services, leaving consumers and markets exposed to potential systemic shocks. MPs are urging targeted stress tests, clearer rules and tighter oversight of critical technology providers.

The influential Treasury Select Committee has warned that the UK financial system “may not be prepared enough” for a major incident linked to artificial intelligence, after an inquiry found regulators are relying too heavily on existing rules and a “wait and see” approach. Committee chairwoman Dame Meg Hillier said that, based on the evidence she had seen, she did not feel confident the system was ready for a major artificial intelligence-related event, and she called for public financial institutions to take a more proactive stance. The committee concluded that while artificial intelligence can improve the speed of services and strengthen cyber defences, current oversight does not adequately address emerging systemic and consumer risks.

MPs said they had received a “significant volume of evidence” about artificial intelligence-related risks to financial services customers, including opaque automated decisions in credit and insurance, the possibility of financial exclusion for disadvantaged groups, and the spread of misleading unregulated advice from artificial intelligence-powered search tools. Evidence to the inquiry suggested that artificial intelligence-driven trading could amplify herding behaviour in markets and, in a worst-case scenario, contribute to a financial crisis, while the technology could also increase the volume and sophistication of cyber attacks. The committee highlighted that UK firms are heavily dependent on a small group of United States technology companies for artificial intelligence and cloud infrastructure, which it said raised concerns about concentration, resilience and oversight.

The report noted that the UK currently has no legislation or financial regulation specific to artificial intelligence, with the Financial Conduct Authority and the Bank of England supervising its use through general rules on consumer protection, market integrity and financial stability. The committee warned that this high-level framework leaves firms with “little practical clarity” on how to apply existing obligations to artificial intelligence, creating uncertainty and potential risks to consumers and the wider system. It urged the Bank of England and the Financial Conduct Authority to run dedicated stress tests for artificial intelligence-driven shocks, and called on the Financial Conduct Authority to publish detailed guidance by the end of this year setting out how consumer protection rules apply and who should be accountable for any harm. The committee also pressed the government to use the Critical Third Parties Regime to designate key artificial intelligence and cloud providers for closer scrutiny, and to ensure that rapid innovation is matched by safeguards.

Evidence received by the committee indicates that more than 75% of UK financial services firms are now using artificial intelligence, with adoption particularly high among insurers and international banks, and the technology being deployed for tasks such as automating administration, processing insurance claims and carrying out credit assessments. The report acknowledged new supervisory tools, including the Financial Conduct Authority’s artificial intelligence live testing service and an enhanced sandbox for firms without their own infrastructure, as well as plans for a joint statutory code of practice with the Information Commissioner’s Office on automated decision making. However, many industry and academic witnesses described the watchdog’s approach as reactive and said fears about liability for consumer harm were having a “chilling effect” on more advanced uses. In response, the Bank of England and the Treasury said they welcomed the committee’s findings and stressed their commitment to balancing risk management with innovation, while announcing two unpaid “champions” from Starling Bank and Lloyds Banking Group to help accelerate safe adoption in financial services from 20 January 2026.
