Artificial intelligence and systemic risk in finance

Artificial intelligence is transforming financial markets with productivity and risk management gains, but its core features may also magnify systemic vulnerabilities and outpace regulators’ oversight.

Artificial intelligence is reshaping finance by increasing productivity, improving decision making and risk management, and enabling more efficient asset allocation across markets and institutions. These advances promise more accurate pricing, faster execution, and more tailored financial services. At the same time, the growing dependence on complex models and data-driven automation is changing how risks propagate through the financial system, raising concerns that benefits could be accompanied by new channels for instability.

The analysis focuses on how artificial intelligence interacts with established sources of systemic risk, including liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage. It argues that five core features of artificial intelligence are especially important: concentration and high entry barriers in model development and data; model uniformity as institutions converge on similar techniques; monitoring challenges created by opacity and complexity; overreliance and excessive trust in automated outputs; and the speed at which algorithms operate and markets react. Each of these characteristics can amplify multiple vulnerabilities at once, increasing the likelihood that shocks spread quickly and in correlated ways across firms and markets.
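To make the uniformity channel concrete, the short sketch below (a hypothetical simulation, not part of the original analysis) compares how often a large share of firms receives the same "sell" signal on the same day when their models load heavily on a common factor versus when they rely more on firm-specific information.

```python
import random


def simulate_sell_pressure(model_correlation: float, n_firms: int = 100,
                           n_days: int = 250, threshold: float = -1.0,
                           seed: int = 0) -> float:
    """Return the worst single-day fraction of firms that sell at once.

    model_correlation controls how much each firm's trading signal loads on a
    shared market factor versus firm-specific noise (hypothetical setup).
    """
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_days):
        common = rng.gauss(0.0, 1.0)           # shared market shock
        sellers = 0
        for _ in range(n_firms):
            idio = rng.gauss(0.0, 1.0)         # firm-specific view
            signal = (model_correlation * common
                      + (1 - model_correlation ** 2) ** 0.5 * idio)
            if signal < threshold:             # model says "sell"
                sellers += 1
        worst = max(worst, sellers / n_firms)
    return worst


if __name__ == "__main__":
    for rho in (0.2, 0.5, 0.9):
        print(f"model correlation {rho:.1f}: "
              f"worst one-day sell fraction {simulate_sell_pressure(rho):.2f}")
```

As the model correlation parameter rises, a single adverse common shock is enough to push most firms into the same trade on the same day, which is the correlated-shock mechanism the analysis warns about.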

In response, the authors call for a comprehensive policy mix to contain systemic risks driven by artificial intelligence while preserving its efficiency gains. They highlight the need for competition policies that address concentration and entry barriers, alongside consumer protection frameworks that mitigate overreliance and unfair outcomes. They also urge a recalibration of prudential regulation and supervision, including capital and liquidity rules, circuit breakers, disclosure requirements, and insider trading standards, to align them with the speed, scale, and opacity of artificial intelligence tools in finance. A central warning is that, “In view of the potential systemic risks, it is essential to implement policies to ensure safe use of AI. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk.”

Impact Score: 70

SoftBank and AMD validate GPU partitioning for artificial intelligence workloads

SoftBank and AMD are jointly validating a GPU partitioning system for AMD Instinct accelerators that allows a single chip to run multiple artificial intelligence workloads in parallel, each tuned to its model’s resource needs. The work targets more efficient use of next-generation artificial intelligence infrastructure amid manufacturing delays for AMD’s next Instinct generation.
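As a purely illustrative sketch of the general idea behind partitioning (not SoftBank’s or AMD’s actual mechanism, and with hypothetical partition sizes and workload figures), the toy scheduler below packs several workloads into fixed slices of a single accelerator’s memory and compute budget using a first-fit rule.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Workload:
    name: str
    mem_gb: float          # accelerator memory the model needs
    compute_share: float   # fraction of the chip's compute it needs


@dataclass
class Partition:
    mem_gb: float
    compute_share: float
    workloads: List[Workload] = field(default_factory=list)

    def fits(self, w: Workload) -> bool:
        used_mem = sum(x.mem_gb for x in self.workloads)
        used_cu = sum(x.compute_share for x in self.workloads)
        return (used_mem + w.mem_gb <= self.mem_gb
                and used_cu + w.compute_share <= self.compute_share)


def pack(workloads: List[Workload],
         partitions: List[Partition]) -> Optional[Dict[str, int]]:
    """Greedy first-fit: place each workload into the first partition with room."""
    placement: Dict[str, int] = {}
    for w in sorted(workloads, key=lambda x: x.mem_gb, reverse=True):
        for i, p in enumerate(partitions):
            if p.fits(w):
                p.workloads.append(w)
                placement[w.name] = i
                break
        else:
            return None  # workload does not fit anywhere
    return placement


if __name__ == "__main__":
    # One accelerator split into four equal partitions (hypothetical sizes).
    parts = [Partition(mem_gb=48, compute_share=0.25) for _ in range(4)]
    jobs = [
        Workload("llm-inference", mem_gb=40, compute_share=0.20),
        Workload("vision-finetune", mem_gb=24, compute_share=0.15),
        Workload("embedding-batch", mem_gb=16, compute_share=0.10),
    ]
    print(pack(jobs, parts))
```

In practice, vendors expose partitioning through their own drivers and runtime tooling; the point here is only that workloads with very different resource footprints can share one chip without contending for the whole device.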

Meta and Nvidia partner on large scale artificial intelligence infrastructure

Meta and Nvidia have signed a multiyear, multigenerational deal to deploy millions of Blackwell and Rubin GPUs in new hyperscale data centers optimized for training and inference workloads. The partnership brings Nvidia CPUs, GPUs, and Spectrum-X networking into Meta’s long-term artificial intelligence infrastructure roadmap.
