Artificial intelligence is already used across financial services for risk assessment, fraud detection and customer service, but its use in a regulated sector raises distinct operational and compliance risks. The article highlights two in particular: imperfect inputs or outputs can produce inaccurate risk assessments and financial losses, and deploying artificial intelligence can contravene existing rules and attract regulatory sanctions. Because financial regulation is largely national, firms face differing expectations and enforcement priorities across jurisdictions.
The UK’s approach is currently pro-innovation, with regulators such as the Financial Conduct Authority monitoring developments. The UK Government has launched a call for evidence on the proposed UK AI Growth Lab, a regulatory incubator in which innovators could run supervised pilot schemes and generate real-world evidence for regulators. The consultation flags concerns about explainability in advice models and cites research suggesting some models may outperform human advisers on certain tasks. The piece warns that increased national rulemaking risks creating additional compliance hurdles, stresses the need for cross-border clarity, and calls out data localisation policies as a potential barrier to scalable artificial intelligence systems.
At the international level, the OECD AI Principles (adopted May 2019 and updated May 2024) provide a non-binding framework focused on inclusive growth, human rights and democratic values, transparency and explainability, robustness and safety, and accountability. The OECD also recommends investing in research, fostering digital ecosystems, ensuring supportive policy environments, building human capacity and pursuing international cooperation. The G20 and G7 have endorsed aligned principles: the G20 reaffirms the OECD aims, while the G7’s Hiroshima Process produced a Code of Conduct in October 2023 targeted at advanced and generative artificial intelligence systems. The Council of Europe’s September 2024 convention emphasises human dignity and safe development.
The article concludes that, while multilateral principles exist, national interests and differing regulatory styles, from principles-based to prescriptive regimes, are driving divergence. For financial services, that divergence creates practical challenges for compliance and cross-border products. Firms therefore need regulatory clarity and consistent expectations across jurisdictions as artificial intelligence evolves.
