How is artificial intelligence regulated globally and in financial services?

This explainer summarises how artificial intelligence regulation is evolving worldwide and the implications for financial services, noting divergent national approaches, international principles and the UK’s regulatory initiatives.

Artificial intelligence is already used across financial services for risk assessment, fraud detection and customer service, but its use in a regulated sector raises distinct operational and compliance risks. The article highlights two in particular: poor-quality inputs or outputs can produce inaccurate risk assessments and financial losses, and the use of artificial intelligence can contravene existing rules and attract regulatory sanctions. Because financial regulation is generally national, firms face differing expectations and enforcement priorities across jurisdictions.

The UK’s approach is currently pro-innovation, with regulators such as the Financial Conduct Authority monitoring developments. The UK Government has launched a call for evidence on the proposed UK AI Growth Lab, a regulatory incubator in which innovators could run supervised pilot schemes and generate real-world evidence for regulators. The consultation flags concerns about explainability in advice models and cites research suggesting some models may outperform human advisers on certain tasks. The piece warns that increased national rulemaking risks creating extra compliance hurdles and stresses the need for cross-border clarity, singling out data localisation policies as a potential barrier to scalable artificial intelligence systems.

At the international level, the OECD AI Principles (adopted in May 2019 and updated in May 2024) provide a non-binding framework focused on inclusive growth, human rights and democratic values, transparency and explainability, robustness and safety, and accountability. The OECD also recommends investing in research, fostering digital ecosystems, ensuring supportive policy environments, building human capacity and pursuing international cooperation. The G20 and G7 have endorsed aligned principles: the G20 reaffirms the OECD’s aims, while the G7’s Hiroshima process produced a Code of Conduct in October 2023 targeted at advanced and generative artificial intelligence systems. The Council of Europe’s September 2024 framework convention emphasises human dignity and safe development.

The article concludes that, while multilateral principles exist, national interests and differing regulatory styles, from principles-based to prescriptive regimes, are driving divergence. For financial services, that divergence creates practical challenges for compliance and cross-border products. Firms therefore need regulatory clarity and consistent expectations across jurisdictions as artificial intelligence evolves.
