Artificial Intelligence in financial services: emerging global norms

A new IRSG report argues that Artificial Intelligence is reshaping financial services by amplifying existing risks rather than creating new ones, and calls for interoperable, principles-based supervision instead of new hard global rules. The analysis highlights broad alignment on high-level international principles but diverging national approaches to implementation.

The International Regulatory Strategy Group report on Artificial Intelligence in financial services identifies a growing global convergence around high-level principles from the OECD, G20 and G7, including human-centricity, transparency, robustness, safety and accountability. The report notes that while these foundational concepts are widely shared, jurisdictions differ significantly in how they translate them into practice, ranging from detailed, prescriptive rulebooks to more flexible, outcomes-focused frameworks and voluntary guidance developed collaboratively with industry.

The IRSG’s analysis describes Artificial Intelligence as a general-purpose technology that can magnify model risk, data governance challenges, third-party concentration risk and cyber threats, particularly in the case of generative Artificial Intelligence, without introducing wholly new financial-sector risks. The report therefore argues that supervisory responses should build on existing technology-neutral rules, favouring interoperable, principles-based oversight over new hard global rules on Artificial Intelligence. Given the technology’s rapid evolution, the authors warn, highly rigid international rulebooks risk becoming obsolete and may fail to support innovation.

IRSG Council chair Farmida Bi stresses that as Artificial Intelligence transforms financial services, regulatory strategies must support both innovation and resilience: coherence without rigidity, shared taxonomies, and supervision through existing frameworks. The report highlights that most jurisdictions draw on OECD, G20 and G7 principles but implement them through different models, citing the European Union’s prescriptive regime, the United Kingdom’s non-statutory, outcomes-focused supervision, and Singapore’s voluntary, co-created guidance. It further cautions that data localisation and extra-territorial measures can fragment markets and impede responsible innovation, and it calls for international cooperation among regulators, policymakers and standard setters to align taxonomies, indicators and supervisory tools so that Artificial Intelligence can be deployed safely and responsibly across borders.

Polis signs regulatory review and Artificial Intelligence bills

Gov. Jared Polis opened his post-session bill signings by approving two measures aimed at improving Colorado’s business climate. One mandates regular reviews of state regulations, while the other rewrites the state’s Artificial Intelligence rules around transparency, human review, and enforcement.

Illinois lawmakers weigh Artificial Intelligence rules

Illinois lawmakers are considering a broad set of Artificial Intelligence proposals focused on consumer protection, privacy, minors, and workplace discrimination. Business groups and technology advocates are pushing for a lighter, more uniform approach as questions linger over federal authority and state enforcement.

Samsung winds down chip lines before 18-day strike

Samsung is moving its semiconductor factories into emergency management mode ahead of an 18-day worker strike. The slowdown could disrupt global DRAM and NAND Flash supply and add pressure to an already tight memory market.

Musk and Altman clash over credibility in final trial week

The final week of the Musk v. Altman trial centered on whether Elon Musk or Sam Altman is more credible, and whether OpenAI abandoned its nonprofit mission. Jurors are now weighing competing claims over control, restructuring, and Artificial Intelligence safety.

Artificial Intelligence model learns to say it does not know

South Korean researchers developed a training method that helps Artificial Intelligence models recognize when they lack knowledge instead of responding with misplaced confidence. The approach aims to reduce hallucinations and improve reliability in areas such as autonomous driving and medicine.
