MPs warn UK financial sector faces serious harm from lack of Artificial Intelligence rules

A Treasury Committee report warns that a wait-and-see stance on Artificial Intelligence in finance is leaving UK consumers and markets exposed to serious risks, from fraud to potential financial crises. MPs call for clearer accountability, stress tests, and practical guidance for firms already heavily using the technology.

UK consumers and the financial system are being exposed to “serious harm” because the government and key regulators have failed to adequately address the risks posed by artificial intelligence, according to a new report from the Treasury Committee. The cross-party group of MPs criticises ministers, the Bank of England and the Financial Conduct Authority for adopting a “wait-and-see” approach to Artificial Intelligence use across the financial sector, despite growing concerns about how the technology could disadvantage vulnerable customers or even help trigger a financial crisis. The report argues that it is the responsibility of the authorities to ensure safety mechanisms keep pace with the rapid deployment of Artificial Intelligence in core financial operations.

The committee highlights that more than 75% of City firms now use Artificial Intelligence, with insurers and international banks among the biggest adopters, and that it is being deployed both to automate administrative work and to handle core tasks such as processing insurance claims and assessing creditworthiness. However, the UK has not developed any specific laws or regulations governing the use of Artificial Intelligence, with the Financial Conduct Authority and Bank of England insisting that existing general rules are sufficient, leaving firms to interpret how current guidelines apply. MPs warn that this patchwork approach could put consumers and financial stability at risk, fuel a lack of transparency in decision making, and leave it unclear whether data providers, technology developers or financial institutions are responsible when things go wrong.

The report also says Artificial Intelligence raises the likelihood of fraud and the spread of unregulated and misleading financial advice, while increasing cybersecurity vulnerabilities and concentrating reliance on a small number of United States technology providers for critical services. It warns that widespread use of similar Artificial Intelligence systems could amplify herd behaviour during economic shocks, “risking a financial crisis”. In response, MPs urge regulators to introduce new stress tests to gauge the City’s resilience to Artificial Intelligence-driven market shocks and press the Financial Conduct Authority to publish “practical guidance” by the end of the year, clarifying how consumer protection rules apply and who is accountable if harm occurs. The Financial Conduct Authority, Treasury and Bank of England all say they will consider the findings, with officials insisting they are working to balance managing Artificial Intelligence risks with unlocking its potential and to strengthen oversight as the technology evolves.


OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.
