Artificial intelligence and systemic risk in finance

Artificial intelligence is transforming financial markets with productivity and risk management gains, but its core features may also magnify systemic vulnerabilities and outpace regulators’ oversight.

Artificial intelligence is reshaping finance by increasing productivity, improving decision making and risk management, and enabling more efficient asset allocation across markets and institutions. These advances promise more accurate pricing, faster execution, and more tailored financial services. At the same time, the growing dependence on complex models and data-driven automation is changing how risks propagate through the financial system, raising concerns that benefits could be accompanied by new channels for instability.

The analysis focuses on how Artificial Intelligence interacts with established sources of systemic risk, including liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage. It argues that five core features of Artificial Intelligence are especially important: concentration and high entry barriers in model development and data, model uniformity as institutions converge on similar techniques, monitoring challenges created by opacity and complexity, overreliance and excessive trust in automated outputs, and the speed at which algorithms operate and markets react. Each of these characteristics can amplify multiple vulnerabilities at once, increasing the likelihood that shocks spread quickly and in correlated ways across firms and markets.

In response, the authors call for a comprehensive policy mix to contain Artificial Intelligence-driven systemic risks while preserving efficiency gains. They highlight the need for competition policies that address concentration and entry barriers, alongside consumer protection frameworks that mitigate overreliance and unfair outcomes. They also urge a recalibration of prudential regulation and supervision, including capital and liquidity rules, circuit breakers, disclosure requirements, and insider trading standards, to align them with the speed, scale, and opacity of Artificial Intelligence tools in finance. A central warning is that, “In view of the potential systemic risks, it is essential to implement policies to ensure safe use of AI. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk.”

Impact Score: 70

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
