AI Risk Management - A History
The second post in a series exploring the application of bank supervisory systems to AI risk management
Two weeks ago we introduced our project on applying principles of risk management from finance and bank supervision to the world of AI. This post is the second in the series and will explore the history of various AI risk management approaches that predate the recent increased focus on the threats and opportunities that AI presents.
The State of Risk Management in the AI Context
Framing AI regulation, supervision, and/or liability regimes as “risk management” is itself a relatively new phenomenon, one that owes something to the term’s prior use in finance. One of the first uses of the term in an AI-relevant context was the Federal Reserve and OCC’s Model Risk Management guidance for banks (SR 11-7), issued in 2011. That guidance, however, was quite limited: it addressed how to conceptualize algorithmic risk within banking, not the breadth of internal and external risks posed by complex models and systems in the contemporary AI context.
Yet today, the most significant AI legislation, whether from the European Union or California, often styles itself as a form of risk management. To understand the emergence of the AI risk management movement, we need to start with the history of AI. (With apologies to Peter, the actual historian in this dialogue.)
A Quick History of AI, and the Prehistory of AI Regulation
AI has been around for at least six decades as a computer science research field, with predecessors dating to before there were programmable computers at all. The name “artificial intelligence” took hold after a summer workshop at Dartmouth in 1956. In business, AI has a similarly long history, one often entangled with finance and financial risk. Credit scores, among the first applications of data analytics and a direct precursor to today’s AI-driven systems in finance, marketing, hiring, and other fields, likewise date to the 1950s and were being produced by sophisticated large-scale algorithms by the 1970s. Outside finance, the US Postal Service began large-scale optical character recognition of addresses on envelopes in the 1980s, using increasingly powerful machine learning techniques for the task. The current AI boom dates from the rapid improvement of deep learning techniques for uses such as predictive modeling, computer vision, and machine translation starting around 2010.
AI’s dominance of business, legal, and political discourse is, of course, much more recent. In November 2022, when OpenAI launched the first public iteration of ChatGPT, interest in and concern about AI, the risks it posed and the opportunities it offered, reached a fever pitch. But the AI boom happened in the absence of regulatory AI risk management frameworks.
Discussions of what these frameworks should be, however, draw on a much deeper context around information, data, privacy, and the management of risks associated with each. In the European Union, for example, the 1995 Data Protection Directive and the far more extensive 2018 General Data Protection Regulation imposed stringent requirements on how companies could collect and use personal information, in order to mitigate various risks and abuses. The framing, however, was still data protection and human rights rather than risk management. Article 22, covering “automated individual decision-making,” which would seemingly address AI, set out a confusing and vague set of requirements that applied only when humans were fully out of the loop, resulting in virtually no real-world enforcement.
The Turn to AI Risk Regulation
While the regulatory and public policy world wasn’t describing AI as a risk, others were. A loose collection of engineers, philosophers, and other thinkers began in the early 2000s to speculate about the dangers that AI might pose. Those dangers could include everything up to and including the end of humanity, if a rogue super-intelligent AI turned on its makers. Initially these were essentially thought experiments based on hypothetical projections of AI capabilities, as there was no immediate prospect of real-world systems reaching that level.
With the improvement of AI performance and the scaling up of adoption in the 2010s, however, the network involved in these discussions grew, and its speculations became more detailed. The extraordinary growth of internet-based digital platforms around this time, as well as the effects of upheavals such as 9/11, the Global Financial Crisis, and the Covid-19 pandemic, created an environment in which catastrophes didn’t sound so far-fetched. And there were sophisticated communities with the connectivity and financial resources to work through and spread their ideas about AI risk. Some members of those communities were involved in the creation of what are today some of the most important frontier AI model developers, such as OpenAI and Anthropic.
When the launch of ChatGPT in November 2022 set off the current AI explosion, therefore, those at the center of promoting generative AI’s revolutionary benefits were also highlighting the catastrophic dangers it posed.
The focus on the existential risks and transformational benefits at the heart of this tension poses an interesting conundrum. Does it reflect enlightened awareness by corporate leaders deeply concerned about doing good for humanity? Or, more cynically, is the focus on extreme outcomes a savvy calculation that redirects regulatory energy away from more immediate, broader, and more extensive regulation of the workaday risks such systems could impose on AI developers? These are open questions. Regardless, the conversation around AI regulation has shifted significantly in recent years toward “AI safety,” risk management, and approaches derived from established liability regimes for risky products.
The quest today is to understand the nature of AI risk and to design regulatory structures, if any, best suited to that risk. To be clear, though, not all - or even most - AI regulation involves novel legal/regulatory regimes or AI-targeted statutes. AI is protean. It can be applied to any industry and context, just like other general-purpose technologies such as computers and the internet. If one uses AI to engage in discriminatory hiring in violation of the Civil Rights Act, the conduct is actionable without a separate AI hiring discrimination law. If one commits fraud or theft using an AI-generated deepfake, neither the Federal Trade Commission nor law enforcement needs to find a specific provision prohibiting such conduct with AI. Existing regulatory structures requiring approval for potentially dangerous items, most notably the Food and Drug Administration’s regime for authorizing medical devices, have already proven able to encompass AI-embedded equipment.
However, most participants in these debates acknowledge that applying existing law to AI risk is insufficient: general-purpose laws and regulations have significant gaps and limitations when applied to AI.
In our next post, we will describe how various jurisdictions – in China, the US, and Europe, especially – have sought to fill in those gaps.