Artificial intelligence (AI) is reshaping finance by increasing productivity, improving decision-making and risk management, and enabling more efficient asset allocation across markets and institutions. These advances promise more accurate pricing, faster execution, and more tailored financial services. At the same time, growing dependence on complex models and data-driven automation is changing how risks propagate through the financial system, raising concerns that these benefits could be accompanied by new channels for instability.
The analysis focuses on how AI interacts with established sources of systemic risk, including liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage. It argues that five core features of AI are especially important: concentration and high entry barriers in model development and data; model uniformity as institutions converge on similar techniques; monitoring challenges created by opacity and complexity; overreliance and excessive trust in automated outputs; and the speed at which algorithms operate and markets react. Each of these characteristics can amplify multiple vulnerabilities at once, increasing the likelihood that shocks spread quickly and in correlated ways across firms and markets.
In response, the authors call for a comprehensive policy mix to contain AI-driven systemic risks while preserving efficiency gains. They highlight the need for competition policies that address concentration and entry barriers, alongside consumer protection frameworks that mitigate overreliance and unfair outcomes. They also urge a recalibration of prudential regulation and supervision, including capital and liquidity rules, circuit breakers, disclosure requirements, and insider trading standards, to align them with the speed, scale, and opacity of AI tools in finance. A central warning is that, “In view of the potential systemic risks, it is essential to implement policies to ensure safe use of AI. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk.”
