Paradigms of AI Risk Management, part I: Introduction
AI legislation and regulation debates are moving fast. What can we learn from bank supervision and securities regulation? A multi-part series from Wharton professors Peter Conti-Brown & Kevin Werbach
Post #1: What Do AI Safety and Bank Supervision Have to Do with One Another?
Scientific and commercial innovations in artificial intelligence could be among the most exciting developments in generations. Following just behind are efforts to form legislative and regulatory frameworks to monitor and control the risks that AI may pose to society. In this multi-part series, Wharton scholars Peter Conti-Brown and Kevin Werbach draw on their respective fields – financial regulation and bank supervision for Conti-Brown, technology policy for Werbach – to understand what parallels and perpendiculars can be drawn between the rich history and practice of financial risk management and the new beginnings of AI risk management.
In the beginning…
In the depths of the Great Depression, as Franklin Roosevelt took the dais on inauguration day, March 4, 1933, the zeitgeist was in equal parts despair at the world that had crashed and burned and hope in what might arise from the ashes. The nation’s banking system was on “holiday,” the euphemistic term for the fact that the entire system had failed and customers could not access their bank accounts until further notice. “The only thing we have to fear is fear itself,” Roosevelt thundered in his inaugural address.
The country believed him. What followed that fateful day was a reinvention of much of financial regulation. It included the creation of modern federal securities laws, a field that had been largely (if halfheartedly) occupied by the states until that time. Federal banking law, which had struggled from its inception in the Civil War and Progressive Era, was also invigorated by the strengthening of the national banking system, central banking, and the creation of federal deposit insurance. In the 90 years since that Big Bang of federal financial law reform, new layers – consumer financial protection, anti-money laundering, financial stability regulation, modern central banking practices, and much more – have been added to that point of origin.
What is AI Risk Management?
With the revolutionary advances and commercialization of artificial intelligence, we may be facing a similar new beginning. There is (as yet) no moment of cataclysm. Yet pronouncements about AI’s impending transformation of society and reconstitution of business alternate with predictions about its catastrophic dangers. AI could cause all manner of harms, from individual injuries to discrimination to economic disruptions to mass casualty events to, according to some, the very end of humanity. The decisions made in recent months and in the months to come – in Sacramento, in Brussels, in Beijing, in Washington, DC – will reverberate for decades.
Reverberation, however, is a neutral description. Will the AI governance structures being adopted today become like the New Deal framework, which has endured (more or less) ever since? Or more like the financial regulatory models of the Revolutionary, Civil War, and Progressive eras, which have largely been abandoned?
To put the question differently: as we evaluate AI risk management frameworks, are we fully informed about the paradigms of risk regulation already in place in other areas of society whose concerns overlap extensively with those raised by AI?
We think that current discussions about AI risk management frameworks are insufficiently attuned to other domains. The temptation is to regard a distinctively innovative set of technologies such as AI as requiring a correspondingly idiosyncratic set of regulatory and legislative practices. This is folly. While policymakers will need to pay careful attention to the specific risks that AI poses, the temptation toward wheel reinvention has sometimes dominated these discussions.
Adding wheels, not reinventing them
This blog series – part of a larger project – will apply cognate risk management frameworks from another domain to that of AI. We will focus on financial regulation in the US: the varied ways in which both capital markets and banks are regulated and supervised.
The exercise is one of analogical reasoning. We seek to define and explore risk conceptually in a way that will be recognizable in both fields, and then to address how policymakers – in politics, industry, Congress, and regulatory bodies – have regulated risk in finance.
The end result, we hope, will be both synthetic and accretive. The synthesis will be to organize the long history of financial regulation so as to understand the basic paradigms of risk regulation that have prevailed in various contexts. There is no broad consensus on how banking risk should be regulated; one need only look at the recent, epic debates about bank capital that are still underway to understand the tension. That said, partisans and policymakers widely agree on the ends and tools of financial regulation, even if they disagree on how those tools should be deployed to reach those ends.
We are not the first to take up the question of paradigmatic analogy in AI risk management, including consideration of financial governance institutions as models. Other analogies – to the approval of medical devices or to nuclear safety, for example – may indeed be instructive. We concentrate on financial regulation both because it draws on our expertise as business school scholars and because we believe the comparison has not been sufficiently explored to date. There has been much discussion of how banks and other financial institutions should engage in internal governance of their AI systems, building on the model risk management obligations imposed after the Global Financial Crisis of 2008. And AI can itself be used to manage risks in financial services, such as fraud or money laundering.
What we can learn from bank supervision and securities regulation
Our focus is different. We take a detailed look at the history and operation of bank supervision and related forms of financial regulation, and then explore how they map onto the particular set of concerns being expressed in the context of AI risk management. From our initial conversations with experts on both sides, we believe this exercise will be enlightening for both fields. Our exploration will help identify what could work – or not – in AI risk management. It will also shed light on hidden assumptions, limitations, and opportunities within the domain of financial regulation.

This series will unfold as follows. In the coming weeks, we will describe the state of play of paradigmatic approaches to AI risk management, summarizing approaches that sound in tort and product liability, human rights and antidiscrimination, existential risk management, self-regulatory risk management, and internet regulation. We will then describe in separate posts the regulatory-supervisory toolkit in banking and securities regulation; the role of disclosure and confidentiality, including where those principles are in tension; the challenge of charter-focused supervision; and the role of federalism in determining the appropriate scope of regulation.
The effort throughout is not to prescribe one specific mode of risk management that will always be applicable to AI. It is to note the lessons learned – for better and for worse – from the public-private system of financial risk management that has evolved fitfully in the United States over the past 160 years. We are at the beginning of the beginning in thinking through the superstructure of AI safety. Our aim is to burrow into the helpful and unhelpful analogues in banking and finance, domains that have experienced invention, cataclysm, and reinvention in ways that bear directly on the AI discussions ahead.