
How the financial authorities can respond to AI threats to financial stability

Artificial intelligence can act to either stabilise the financial system or to increase the frequency and severity of financial crises. This second column in a two-part series argues that the way things turn out may depend on how the financial authorities choose to engage with AI. The authorities are at a considerable disadvantage because private-sector financial institutions have access to expertise, superior computational resources, and, increasingly, better data. The best way for the authorities to respond to AI is to develop their own AI engines, set up AI-to-AI links, implement automatic standing facilities, and make use of public-private partnerships.

Artificial intelligence (AI) has considerable potential to increase the frequency and severity of financial crises. We discussed this last week on VoxEU in a column titled “AI financial crises” (Danielsson and Uthemann 2024a). But AI can also stabilise the financial system. Which way it goes depends on how the authorities engage with it.

In Russell and Norvig’s (2021) classification, we see AI as a “rational maximising agent”. This definition resonates with typical economic analyses of financial stability. What distinguishes AI from purely statistical modelling is that it not only uses quantitative data to provide numerical advice; it also applies goal-driven learning to train itself on qualitative and quantitative data, providing advice and even making decisions.

One of the most important tasks – and not an easy one – for the financial authorities, and central banks in particular, is to prevent and contain financial crises. Systemic financial crises are very damaging and cost the large economies trillions of dollars. The macroprudential authorities have an increasingly difficult job because the complexity of the financial system keeps increasing.

If the authorities choose to use AI, they will find it of considerable help because it excels at processing vast amounts of data and handling complexity. AI can unambiguously aid the authorities at the micro level, but it is likely to struggle in the macro domain, where systemic events are rare and each crisis is different, leaving little data to learn from.

The authorities find engaging with AI difficult. They have to monitor and regulate private AI while identifying systemic risk and managing crises that could develop more quickly, and prove more intense, than those we have seen before. To remain relevant overseers of the financial system, the authorities must not only regulate private-sector AI but also harness it for their own mission.

Not surprisingly, many authorities have studied AI, including the IMF (Comunale and Manera 2024), the Bank for International Settlements (Aldasoro et al. 2024, Araujo et al. 2024), and the ECB (Moufakkir 2023, Leitner et al. 2024). However, most published work from the authorities focuses on conduct and microprudential concerns rather than on financial stability and crises.

Compared to the private sector, the authorities are at a considerable disadvantage, and this is exacerbated by AI. Private-sector financial institutions have access to more expertise, superior computational resources, and, increasingly, better data. AI engines are protected by intellectual property and fed with proprietary data – both often out of reach of the authorities.

This disparity makes it difficult for the authorities to monitor, understand, and counteract the threat posed by AI. In a worst-case scenario, it could embolden market participants to pursue increasingly aggressive tactics, knowing that the likelihood of regulatory intervention is low.

Responding to AI: Four options

Fortunately, the authorities have several good options for responding to AI, as we discussed in Danielsson and Uthemann (2024b). They could use triggered standing facilities, implement their own financial system AI, set up AI-to-AI links, and develop public-private partnerships.

1. Standing facilities

Because of how quickly AI reacts, the discretionary intervention facilities that are preferred by central banks might be too slow in a crisis.

Instead, central banks might have to implement standing facilities with predetermined rules that allow for an immediate, triggered response to stress. Such facilities could have the side benefit of ruling out some crises caused by the private sector coordinating on run equilibria. If AI knows central banks will intervene when prices drop by a certain amount, the engines will not coordinate on strategies that are only profitable if prices drop further. Short-term interest rate announcements work the same way: they are credible because market participants know central banks can and will intervene, so the announced rate becomes a self-fulfilling prophecy even without central banks actually having to intervene in the money markets.
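
To make the mechanism concrete, here is a minimal sketch in Python of such a pre-announced trigger rule. All names, thresholds, and the intervention logic are purely illustrative assumptions for exposition – this is not an actual central bank facility.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StandingFacilityRule:
    """A hypothetical pre-announced intervention rule.

    If the fall from the reference price exceeds `trigger_drop`, the
    facility intervenes automatically at a published price floor; no
    committee meeting is needed, so the response matches AI speed.
    """
    trigger_drop: float         # e.g. 0.10 = intervene after a 10% fall
    support_price_ratio: float  # price floor as a fraction of the reference

    def check(self, reference_price: float, current_price: float) -> Optional[float]:
        """Return the supported price floor if the rule triggers, else None."""
        drop = 1.0 - current_price / reference_price
        if drop >= self.trigger_drop:
            return reference_price * self.support_price_ratio
        return None

# Because the rule is public, a private AI evaluating a run strategy that
# is only profitable below the floor can verify in advance that it fails,
# so -- ideally -- the facility stabilises prices without ever being used.
rule = StandingFacilityRule(trigger_drop=0.10, support_price_ratio=0.85)
print(rule.check(reference_price=100.0, current_price=88.0))  # 85.0
```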

Would such an automatic programmed response to stress need to be non-transparent to prevent gaming and, hence, moral hazard? Not necessarily. Transparency can help prevent undesirable behaviour; we already have many examples of well-designed transparent facilities promoting stability. If one can eliminate the worst-case scenarios by preventing private-sector AI from coordinating on them, strategic complementarities will be reduced. Moreover, if the intervention rule prevents bad equilibria, market participants will never need to call on the facility in the first place, keeping moral hazard low. The downside is that a poorly designed pre-announced facility will invite gaming and hence increase moral hazard.

2. Financial system AI engines

The financial authorities can develop their own AI engines to monitor the financial system directly. If they can overcome the legal and political difficulties of data sharing, they can leverage the considerable amount of confidential data they already have access to and so obtain a comprehensive view of the financial system.

3. AI-to-AI links

One way to take advantage of the authorities’ AI engines is to develop AI-to-AI communication frameworks, allowing authority engines to communicate directly with those of other authorities and of the private sector. The technological requirement is a communication standard – an application programming interface, or API: a set of rules and standards that allows computer systems built on different technologies to communicate with one another securely.
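
What such a standard would look like in practice is an open design question. The sketch below shows one hypothetical shape for an authority-to-private query message in Python; every field name is an assumption made for illustration, and a real standard would also have to specify authentication, audit trails, and confidentiality safeguards.

```python
import json

# A hypothetical AI-to-AI message: an authority engine asks a private
# engine how it would respond to a specified stress scenario. The schema,
# identifiers, and scenario contents are all illustrative.
query = {
    "schema_version": "0.1",
    "sender": "authority-stability-ai",
    "recipient": "bank-x-trading-ai",
    "scenario": {
        "shock": "sovereign_downgrade",
        "horizon_days": 5,
        "price_paths": {"10y_gov_bond": [-0.02, -0.05, -0.08, -0.08, -0.06]},
    },
    "requested_outputs": ["projected_positions", "liquidity_demand"],
}

# Serialise for transport; any mutually authenticated channel could carry
# the payload -- the standard is the schema, not the pipe.
print(json.dumps(query, indent=2))
```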

Such a set-up would bring several benefits. It would facilitate regulation by helping the authorities monitor and benchmark private-sector AI directly against predefined regulatory standards and best practices. AI-to-AI communication links would also be valuable for financial stability applications such as stress testing.

When a crisis happens, the overseers of the resolution process could task the authority AI with leveraging the AI-to-AI links to run simulations of alternative crisis responses – liquidity injections, forbearance, or bailouts – allowing regulators to make more informed decisions.
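
A minimal sketch of that simulation loop, in Python, appears below. The `simulate_system` function is a stub standing in for queries to the linked engines over the AI-to-AI API – here it just draws reproducible random losses so the example runs end to end – and the policy names and loss units are invented for illustration.

```python
import random

POLICIES = ["liquidity_injection", "forbearance", "bailout"]

def simulate_system(policy: str, seed: int) -> float:
    """Stub: in a real set-up this would aggregate the responses of the
    linked private-sector engines to the candidate policy."""
    rng = random.Random(f"{policy}-{seed}")  # deterministic per (policy, run)
    return rng.uniform(0.0, 100.0)           # system-wide loss, arbitrary units

def rank_policies(n_runs: int = 200) -> list[tuple[str, float]]:
    """Average simulated losses per policy and rank them best-first."""
    scores = {
        p: sum(simulate_system(p, s) for s in range(n_runs)) / n_runs
        for p in POLICIES
    }
    return sorted(scores.items(), key=lambda kv: kv[1])

for policy, avg_loss in rank_policies():
    print(f"{policy:20s} expected loss: {avg_loss:6.1f}")
```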

If perceived as competent and credible, the mere presence of such an arrangement might act as a stabilising force in a crisis.

The authorities need to have this response in place before the next stress event occurs. That means making the necessary investments in computing, data, and human capital, and working through the legal and sovereignty issues that will arise.

4. Public-private partnerships

The authorities need access to AI engines that match the speed and complexity of private-sector AI. It seems unlikely they will end up with engines designed entirely in house, as that would require considerable public investment and a reorganisation of the way the authorities operate. A more likely outcome is the type of public-private partnership that has already become common in financial regulation, as in credit risk analytics, fraud detection, anti-money laundering, and risk management.

Such partnerships come with downsides. Risk monoculture arising from an oligopolistic AI market structure would be a real concern. Partnerships might also prevent the authorities from collecting information about decision-making processes, since private-sector firms prefer to keep their technology proprietary and undisclosed, even to the authorities. However, that might not be as big a drawback as it appears: evaluating engines via AI-to-AI benchmarking may not require access to the underlying technology, only observation of how an engine responds in particular cases – exactly what the AI-to-AI API links provide.
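
The sketch below illustrates this black-box benchmarking in Python: the authority probes a remote engine with scenarios over the API link and flags answers that diverge from a regulatory benchmark, without ever seeing the engine’s internals. Both functions are toy stand-ins invented for this example.

```python
def private_engine(scenario: dict) -> float:
    """Stand-in for a proprietary engine queried over the AI-to-AI link.
    Returns the engine's recommended fire-sale fraction (toy behaviour)."""
    return min(1.0, 2.0 * scenario["price_drop"])

def regulatory_benchmark(scenario: dict) -> float:
    """The supervisor's reference answer: sell proportionally, no spiral."""
    return scenario["price_drop"]

# Probe scenarios: price drops of 10% to 50%.
probes = [{"price_drop": d / 10} for d in range(1, 6)]

# The monitoring signal is the gap between answers, not the model itself.
for scenario in probes:
    gap = private_engine(scenario) - regulatory_benchmark(scenario)
    flag = "FLAG" if gap > 0.2 else "ok"
    print(f"drop={scenario['price_drop']:.1f}  gap={gap:+.2f}  {flag}")
```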

Dealing with the challenges

Although no technological barrier prevents the authorities from setting up their own AI engines and implementing AI-to-AI links with current AI technology, they face several practical challenges in implementing the options above.

The first challenge is data access and sovereignty. The authorities already struggle with data access, and the problem seems to be getting worse as technology firms own the data and protect their measurement processes with intellectual property rights. The authorities are also reluctant to share confidential data with one another.

The second issue is how to deal with AI that causes excessive risk. One suggested policy response is to suspend such AI with a ‘kill switch’, akin to the trading suspensions used in flash crashes. We suspect this might be less viable than the authorities think: it may not be clear how the system would function if a key engine were suddenly switched off.

Conclusion

If the use of AI in the financial system grows rapidly, it should increase the robustness and efficiency of financial services delivery at a much lower cost than is currently the case. However, it could also bring new threats to financial stability.

The financial authorities are at a crossroads. If they are too conservative in reacting to AI, it could well become embedded in the private financial system without adequate oversight. The consequence might be an increase in the frequency and severity of financial crises.

However, the increased use of AI might instead stabilise the system, reducing the likelihood of damaging financial crises. This is more likely to happen if the authorities take a proactive stance and engage with AI: developing their own AI engines to assess the system, leveraging public-private partnerships, and establishing AI-to-AI communication links to benchmark private-sector AI. Those links would allow them to run stress tests and simulate crisis responses. Finally, the speed of AI-driven crises underlines the importance of triggered standing facilities.

Authors’ note: Any opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the Bank of Canada.

References

Aldasoro, I, L Gambacorta, A Korinek, V Shreeti and M Stein (2024), “Intelligent financial system: How AI is transforming finance”, BIS Working Paper.

Araujo, D K G de, S Doerr, L Gambacorta and B Tissot (2024), “Artificial intelligence in central banking”, BIS Bulletin.

Comunale, M and A Manera (2024), “The economic impacts and the regulation of AI: A review of the academic literature and policy actions”, IMF Working Paper.

Danielsson, J and A Uthemann (2024a), “AI financial crises”, VoxEU.org, 25 July.

Danielsson, J and A Uthemann (2024b), “Artificial intelligence and financial crises”, available at SSRN.

Leitner, G, J Singh, A van der Kraaij and B Zsámboki (2024), “The rise of artificial intelligence: Benefits and risks for financial stability”, ECB Financial Stability Review.

Moufakkir, M (2023), “Careful embrace: AI and the ECB”, ECB blog post.

Russell, S and P Norvig (2021), Artificial Intelligence: A Modern Approach, Pearson.