Paradigms of AI Risk Management, part III: The state of AI Law circa November 2024
Part III in my series with Kevin Werbach on lessons (and anti-lessons) from bank supervision for AI Risk Management. This week, what exactly is the state of AI Risk Law?
In our previous posts (here and here), we discussed our ambitions in providing a roadmap—and some of the roads to take (and avoid)—from bank supervision and financial regulation to AI risk management. We also provided a brief history of the path of AI risk management that preceded the major developments of the last two years.
In this post, we discuss the ways that governments around the world have sought to grapple with the constantly changing world of commercial AI development. Their efforts represent disparate approaches, based on overlapping yet distinct theories of what AI risk is and of the appropriate division of regulatory responsibility between government and industry.
The Chinese Model
First out of the gate in adopting enforceable and targeted regulation of AI was China. It has adopted three AI-specific laws to date: on recommender systems, on synthetic content generation, and on generative AI services. There are many aspects of these laws that could easily be adopted in the US, and perhaps should be, including requirements that companies take reasonable steps to mitigate risks of models, standardized evaluation metrics, and an “algorithm registry” so regulators can better understand what is in the market. Other parts of the Chinese regime, as one might expect, are not consistent with Western values and law. These include requirements that generative AI systems promote “core socialist values” and severe speech restrictions so that AI chatbots cannot generate forbidden content (such as information about the Tiananmen Square massacre of 1989).
In the US
To date, the US has not adopted any significant federal legislation governing private-sector AI deployment. However, the Biden Administration issued a major framework for AI governance, the Blueprint for an AI Bill of Rights, in 2022. It followed this blueprint with a comprehensive AI Risk Management Framework from the National Institute of Standards and Technology (NIST) in early 2023 and a massive Executive Order (EO) on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in October 2023.
The first two of these Biden Administration efforts are not enforceable against the private sector. The Executive Order is enforceable, to a limited extent, in two ways. First, it imposes detailed requirements on federal agencies for their development and procurement of AI, with the expectation that this will spill over to influence the private-sector providers who sell to them. There are also a number of standards development initiatives, building on the earlier NIST work, and provisions addressing other AI policy concerns such as workforce readiness, immigration policy, export controls, and public availability of AI computing capacity for research.
Second, the EO invokes the Defense Production Act, a Cold War-era law granting the President limited powers over private companies without further Congressional action, to impose requirements on “dual use foundation models.” These are the highest-performing frontier models, requiring a level of computing capacity to train that exceeds that of any model on the market today. It is such models that potentially pose the greatest risk of catastrophic harm. Under this authority, the Executive Order requires firms developing them to provide extensive information to the US government prior to release.
As Kevin explains on his own Substack, the incoming Trump Administration has pledged to repeal the EO, on the grounds that it represents an unreasonable impediment to American AI innovation. There is always some uncertainty in predicting how a new Administration will act. In this case, however, a number of prominent tech industry executives and investors were major Trump supporters and are close to incoming Vice President J.D. Vance. Removing restraints on AI development is a priority for them, partly as a matter of national security and competition with China. Whether that will trump concerns about AI safety and the need for oversight of how US-developed models might be exploited by China remains to be seen.
The view from Congress and the states
Although the US Congress has not yet acted, several AI regulation bills have been introduced, often with bipartisan support. Some focus on specific dangers, such as copyright violations and deepfakes. Others are broader. The Algorithmic Accountability Act would mandate that those deploying automated decision-making systems in areas where they could cause significant harm, such as hiring, conduct risk audits and provide them to a government agency. The proposal from Senators Hawley and Blumenthal would create a licensing regime for high-risk frontier AI models, along with a liability regime for AI-based harms that includes a private right of action for damages.
The most significant developments in the US, however, have been in the states. There are several hundred AI bills pending in state legislatures or municipal governments, with dozens already adopted. Among the most notable are New York City’s Local Law 144 and the Colorado AI Act. Local Law 144 mandates that companies using automated systems, including AI, for hiring publish an adverse impact analysis providing data that can be used to assess whether the system produces biased outcomes on the basis of race, gender, or a combination of the two. The Colorado AI Act requires that developers and deployers of AI use reasonable care to prevent algorithmic discrimination through AI systems in settings such as education, employment, financial services, healthcare, or housing. This includes requirements for algorithmic impact statements, consumer disclosures, and incident reporting.
While these two laws seem to have significant coverage, they are limited to considerations of algorithmic bias, not the broader and downstream harms potentially included in an AI risk management framework. Moreover, they do not apply every time AI is incorporated into a decision. The Colorado legislation requires that AI be a “substantial factor” in outcomes, while the New York City law has a stricter requirement that AI be “used to substantially assist or replace discretionary decision making.” Companies are already taking the position that virtually any residual human involvement absolves them from Local Law 144 obligations.
The final major state law deserving of serious discussion is California Senate Bill 1047. This legislation would have imposed a series of obligations on developers of high-performance foundation models that require a large amount of computing power and at least $100 million to train, and that pose a risk of causing mass casualties or damages exceeding $500 million. Those developers would have been required to take reasonable care in testing and documenting their systems to mitigate harms, with certifications filed with the state and subject to enforcement by the state Attorney General. SB 1047 became a source of massive controversy when many in the AI industry and research community argued that it would chill AI innovation and undermine the AI industry in California. The bill was ultimately vetoed by California Governor Gavin Newsom.
The European Model
The most extensive effort so far to implement AI risk regulation is the European Union’s AI Act, which was adopted in early 2024 and will go into effect in stages through 2026. Its conception of risk differs importantly from that of its most natural predecessor, the General Data Protection Regulation (GDPR) referenced in our last post. The GDPR focuses heavily on the protection of human rights in an increasingly data-driven, digital economy. The AI Act is framed instead in terms of product safety. The goal is to ensure that AI systems are trustworthy, which means identifying and addressing AI risks. The AI Act attempts to do so horizontally, focusing not on particular industries but on tiers of use categories based on their riskiness. (In financial regulation, this is the problem of “activities” versus “entities” regulation; see here for an overview.)
In the top tier are AI systems posing unacceptable risks, such as manipulation, indiscriminate biometric scanning, and emotion detection in the workplace or education; these are banned outright. (What deserves inclusion on this list is not obvious, and reflects a political accommodation between members of the European Parliament and law enforcement.) In the next tier are high-risk uses, such as the deployment of AI systems in healthcare, education, employment, and law enforcement, which are subject to extensive pre-deployment testing, disclosure, and post-deployment monitoring obligations. The lowest tier covers most other AI uses that pose some material risk of harm; these are subject only to basic transparency obligations, such as disclosing when a user is interacting with an AI system. Anything else is not subject to AI-specific regulation.
The EU began the process of developing the AI Act with a high-level working group in 2019 and issued the first draft of the regulation in 2021, before generative AI (GenAI) foundation models had been widely deployed to the public through products such as OpenAI’s ChatGPT. The tiered system was ill-suited to govern the risks that GenAI posed. GenAI “foundation models” are pretrained for general capabilities and can be incorporated by users or deployers into virtually any use case. That easy incorporation raises significant novel issues, among them copyright questions about data scraped from the internet and about potentially infringing new works generated by the models. A new section for General Purpose AI (GPAI) models was therefore added to the AI Act at the last minute, ultimately including only relatively limited disclosure and governance requirements. There is also a section for systemic-risk frontier models that might pose catastrophic dangers, with provisions mostly in line with the US Executive Order. And post-adoption, there is now an extensive process of developing standards and industry codes of practice that will specify the details of compliance.
Conclusion
These different government approaches reveal several trends. First, this area of law and policy is moving very quickly. The speed of change is appropriate given the speed of commercialization, but it also poses an important risk of its own: the more comprehensive and rigid the conception of risk management that these AI laws impose, the more likely they are to become quickly dated, casting in amber a risk equilibrium relevant in the early 2020s but irrelevant just a few years later. Second, there is no universal model of risk management that these various efforts have adopted. The closest common ethos would probably be one of consumer protection, but this is too broad to be useful. The danger is that government actors will know which groups they are trying to protect but have much less sense of how to go about doing it.
The view from bank supervision and financial regulation can help answer both questions: whom to protect and how to protect them. We will turn to those issues in the next installment.