Strategies for Regulating Artificial Intelligence - Guest Post, Cary Coglianese
We publish this post as part of our series joining insights from AI and financial regulation (see here, here, and here). It comes to us from our colleague Cary Coglianese, who builds on his prior work to describe options for regulating AI risks.
The fundamental challenge in regulating AI risk stems from the extreme heterogeneity across AI tools, uses, and concomitant harms. The rapid emergence of foundation models and generative AI tools such as ChatGPT and other large language models only exacerbates this challenge. These multifunctional AI tools—the digital equivalent of Swiss Army knives—can perform virtually countless functions, including drafting legal documents, generating computer code, and creating digital art, to name just a few. This multifunctionality—combined with the rapid evolution of these models over time—makes traditional, so-called prescriptive regulatory approaches generally unsuitable for managing the risks from these new technological advances.
Prescriptive regulation, which attempts to mandate exactly how technologies should be designed or operated, simply will not work as a general strategy for governing multifunctional AI. The technology is too complex, too varied in its applications, and too rapidly evolving for regulators to specify detailed requirements that would effectively manage risks while preserving benefits. Instead, regulators need to embrace more flexible approaches.
Four main flexible regulatory strategies show promise for governing multifunctional AI, although each comes with important advantages and disadvantages.
Performance standards represent one approach. Rather than prescribing how AI systems should be built, they would require those systems to meet specified benchmarks. Performance standards have been heralded in other settings as a more cost-effective form of regulatory control: they require that risks be kept below a stipulated level while giving firms flexibility in choosing how to meet that level. Although performance standards are potentially useful for narrow, single-function AI applications—such as medical diagnostic tools—they quickly become unrealistic when applied to foundation models that can serve countless different purposes. The regulatory task of defining meaningful performance metrics across such a full range of varied use cases will be infeasible.
Information disclosure requirements offer a second path forward. These regulations could mandate that companies reveal when AI systems are being used (existence disclosure) or that they report specific performance metrics (performance disclosure). The latter type of disclosure will generally suffer from the same limitations as performance standards. Although disclosure of any kind may help users make informed choices and potentially encourage better corporate behavior, the effectiveness of information disclosure as a regulatory strategy depends on whether the disclosed information is actually useful and comprehensible to its intended audience. Technical disclosures about AI systems are likely to prove too complex for ordinary consumers, at least, to evaluate meaningfully.
Ex post liability represents a third strategy, allowing for compensation and accountability after harms occur. This approach has the advantage that regulators need not anticipate every possible risk in advance. However, establishing causation and assigning ex post responsibility can be quite challenging with AI systems, as multiple parties—from model developers to end users—might plausibly contribute to harmful outcomes, a difficulty compounded by the black-box nature of many AI algorithms. Furthermore, ex post liability may prove inadequate for prevention, as it imposes consequences only after harms have occurred.
Management-based regulation—the fourth main strategy—emerges as the most practical and promising general approach for governing multifunctional AI. Under this strategy, companies are required to develop internal processes to identify and manage risks, implement protective measures, and continuously monitor and improve their risk management practices. This approach—which the EU has adopted for high-risk uses of AI—leverages private sector expertise while providing flexibility to adapt to emerging challenges. Companies can tailor their risk management to their specific AI applications while regulators focus on ensuring robust oversight processes rather than prescribing detailed technical requirements.
In reality, AI governance across distinct domains—e.g., financial regulation, auto safety regulation, medical device regulation—will likely call for some combination of multiple regulatory strategies, with management-based regulation providing a vital overlay. It will be imperative for regulators to ensure that firms’ managers and engineers maintain continuous vigilance over their development and use of AI tools. And the same holds true for regulators themselves, who will need to be vigilant and agile. Effective regulatory governance of AI will demand that regulators have access to sufficient resources—fiscal, technological, and human.
Regulators must see themselves as overseers of dynamic AI ecosystems rather than simply enforcers of static rules. Regular monitoring and swift response capabilities will be essential, as will meaningful engagement with various stakeholders—including AI developers, academic researchers, civil society organizations, and end users. End users in particular can prove vital in identifying the risks, biases, and limitations that arise when AI tools are applied. Feedback loops and channels for reporting such information should be built into any responsible management-based regulation.
Although regulating multifunctional AI presents daunting challenges, workable approaches do exist. Success will require moving beyond a conception of regulatory governance as a search for a rigid, one-size-fits-all solution toward a more flexible, multifaceted vision of governance grounded in human oversight. Just as societies have developed varied rules for managing different uses of multifunctional physical tools, such as knives, they will need to devise diverse and flexible regulatory strategies for managing varied applications of AI and their attendant heterogeneous risks. The key lies in building regulatory systems that can evolve alongside the technology while maintaining robust oversight that both protects the public and allows society to benefit from new technological innovation.
Selected Additional Readings:
Cary Coglianese & Colton R. Crum, “Regulating Multifunctionality,” in Philipp Hacker, Andreas Engel, Sarah Hammer, and Brent Mittelstadt, eds., The Oxford Handbook on the Foundations and Regulation of Generative AI (forthcoming).
Cary Coglianese & Colton R. Crum, “Leashes, Not Guardrails: A Management-Based Approach to Artificial Intelligence Risk Regulation,” Risk Analysis (forthcoming).
Cary Coglianese & Nabil Shaikh, “Management-Based Oversight of the Automated State: Emerging Standards for AI Impact Assessment and Auditing in the Public Sector,” in Emad Yaghmaei et al., eds., Global Perspectives on AI Impact Assessment (forthcoming).
Cary Coglianese, “Regulating Machine Learning: The Challenge of Heterogeneity,” TechReg Chronicle 17-27 (February 2023).
Cary Coglianese & Alicia Lai, “Algorithm vs. Algorithm,” Duke Law Journal 72:1281-1340 (2022).