AI and Financial Regulation: Cascades, Crashes, and Causation - guest post from Jonathan Iwry
As part of our series on AI Risk Management and Financial Regulation, Jonathan Iwry explores questions of responsibility and automation in a world of artificial intelligence.
On the morning of April 7, 2025, a false headline about a potential pause in the President’s tariffs began circulating on social media. Within minutes, it was picked up by a widely followed account, repeated on CNBC, and relayed by Reuters. Automated trading systems reacted immediately, driving the stock market up by $2.4 trillion in ten minutes before the gains were erased minutes later, once the White House denied the report. No single actor caused the event. An anonymous post, informal amplification, mistaken attribution, and algorithmic trading behavior combined to produce one of the most volatile swings in recent market history.
As AI becomes more tightly integrated into our institutional infrastructure, these kinds of cascading events are likely to grow in frequency and magnitude. Financial markets have long relied on automation, but the rapid adoption of AI-driven trading is introducing major new risks to market stability—in particular, the risk of inducing herd behavior and amplifying market volatility through network effects. While it is clear that these incidents must be addressed and that some form of responsibility must attach, the question of who—or what—caused the harm is often surprisingly difficult to answer.
Any hope of managing these risks will turn largely on our ability to hold financial actors accountable for the damage caused by their AI systems. This raises questions about a deceptively simple but critical aspect of liability that has thus far received relatively little attention from regulators and scholars focused on AI’s financial risks: causation. When can an autonomous trading algorithm be said to have caused (or not caused) market volatility? When multiple AI systems interact in complex and nontransparent ways, will courts be up to the task of assessing each defendant’s contributions and allocating liability proportionally?
Events like these illustrate a critical challenge for AI governance. While not every disruption will involve a breach of duty or a defective AI system, the complexity, opacity, and interactivity of AI technologies make it vastly harder to detect wrongful conduct, trace causal chains, and assign responsibility. If courts and regulators continue relying on a causation standard designed for simpler, more transparent streams of influence, accountability for AI-driven harms might prove elusive even when genuine defects, negligence, or manipulation occur. Financial markets illustrate how AI can create systemic risks. An effective liability-based AI governance strategy will struggle to manage those risks unless current causation doctrine can be adapted to handle complex, multi-agent causal relationships.
The Financial Risks of AI
AI’s power in finance stems from its ability to analyze vast datasets and execute trades at lightning speed. But this strength is also a vulnerability. When many parties across the market rely on the same or similar AI models, herd behavior emerges—everyone rushing to buy or sell the same assets at once. This can distort prices, inflate bubbles, and trigger cascading failures in which AI-driven trading amplifies a downturn into a crash.
This is not hypothetical. In the 2010 Flash Crash, high-frequency trading algorithms initiated a cascade of rapid-fire trades that wiped nearly $1 trillion off the market in minutes before partially rebounding. In 2016, algorithmic trading was blamed for an overnight 6% drop in the British pound. In 2024, a typo in Lyft’s earnings report—mistakenly projecting a 500-basis-point profitability increase instead of 50 basis points—sent trading algorithms into a frenzy, driving the stock price up 60% in after-hours trading. And April’s tariff headline incident, though not necessarily tied to any breach of duty, showed how AI’s drastic responses to financial signals—including its inability to evaluate misinformation—can produce sudden, massive, and potentially destabilizing market activity.
Widespread reliance on AI models becomes even riskier with autonomous, “agentic” AI trading bots. If multiple bots rely on the same market indicators, they might all sell off simultaneously when momentum shifts downward and then reenter at the same time when prices begin to recover. This exaggerated price action increases volatility and destabilizes markets. While such feedback loops are well known among regulators and scholars of finance, AI-driven trading runs the risk of making them more frequent, extreme, and damaging—and makes it harder to regulate them and assign liability.
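To make the feedback-loop dynamic concrete, the toy simulation below compares a market in which five bots trade on independent readings of a momentum signal with one in which they all chase the identical signal. It is a minimal sketch: the momentum rule, price-impact constant, and noise levels are illustrative assumptions, not a model of any real trading system, and in this toy setup the shared-signal case typically produces larger swings.

```python
import random

def simulate(num_bots: int, shared_signal: bool, steps: int = 500, seed: int = 7) -> float:
    """Toy price path: each bot buys on upward momentum and sells on downward momentum.

    Returns the standard deviation of per-step returns as a crude volatility measure.
    All constants are illustrative assumptions, not calibrated to real markets.
    """
    rng = random.Random(seed)
    prev_price, price = 100.0, 100.0
    returns = []
    for _ in range(steps):
        momentum = price - prev_price
        net_order = 0.0
        for _bot in range(num_bots):
            # With a shared signal, every bot sees identical momentum and acts in lockstep;
            # otherwise each bot's reading is perturbed by idiosyncratic noise.
            signal = momentum if shared_signal else momentum + rng.gauss(0, 0.5)
            net_order += 1.0 if signal > 0 else (-1.0 if signal < 0 else 0.0)
        prev_price = price
        # Price impact proportional to net order flow, plus small exogenous "news" noise.
        price = max(1.0, price + 0.05 * net_order + rng.gauss(0, 0.2))
        returns.append((price - prev_price) / prev_price)
    mean = sum(returns) / len(returns)
    return (sum((r - mean) ** 2 for r in returns) / len(returns)) ** 0.5

if __name__ == "__main__":
    print("volatility with diverse signals:", round(simulate(5, shared_signal=False), 4))
    print("volatility with a shared signal:", round(simulate(5, shared_signal=True), 4))
```

Nothing about any single bot in this sketch is defective; the instability comes from correlation across systems—the same feature that makes apportioning blame after the fact so difficult.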
The Limits of But-for Causation
The predominant standard for causation in the U.S., the but-for test, asks courts to consider whether the harm in question would have happened without the defendant’s wrongful conduct. The test has long been considered the gold standard for causation in U.S. law, but it has known weaknesses that are almost certain to materialize in AI-related scenarios. When breaches of duty do occur—such as through defective AI models, reckless trading algorithms, or negligent oversight—the legal system must still establish a causal link between the wrongful conduct and the harm, and traditional causation doctrines, particularly the but-for test, are poorly suited to complex AI-driven environments.
Imagine five AI systems, each used by a different firm, simultaneously selling off a stock based on the same market signal, resulting in a crash. Crucially, any one of those AI systems alone might have been sufficient to tip the market. Statisticians and tort scholars recognize this as a case of causal “overdetermination”: multiple sufficient causes, none strictly necessary. Under a strict but-for analysis, no single AI passes the test: because the crash might well have happened even without any one system’s involvement, none of them can be shown to have been necessary. As a result, the court would be forced to accept the absurd conclusion that none of the firms could be held responsible—or could even be properly described as having caused the crash to begin with.
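The logical structure of the problem can be spelled out in a few lines of code. The firm names, volumes, and threshold below are invented for illustration: a crash occurs whenever total selling crosses a threshold, each firm’s selling alone crosses it, and so a but-for counterfactual exonerates everyone—while the looser question of whether each firm contributed (anticipating the substantial factor test discussed below) implicates them all.

```python
# Hypothetical overdetermination scenario: five firms' AI systems all sell at once.
# Each firm's volume alone exceeds the (made-up) crash threshold.
SELL_VOLUMES = {"FirmA": 120, "FirmB": 110, "FirmC": 150, "FirmD": 130, "FirmE": 140}
CRASH_THRESHOLD = 100  # total selling above this level triggers the crash (toy number)

def crash_occurs(volumes: dict) -> bool:
    return sum(volumes.values()) > CRASH_THRESHOLD

def is_but_for_cause(firm: str) -> bool:
    """But-for counterfactual: would the crash have been avoided without this firm?"""
    without_firm = {f: v for f, v in SELL_VOLUMES.items() if f != firm}
    return crash_occurs(SELL_VOLUMES) and not crash_occurs(without_firm)

def contributed(firm: str) -> bool:
    """Rough contributing-factor check: did the firm add real selling pressure to a crash?"""
    return crash_occurs(SELL_VOLUMES) and SELL_VOLUMES[firm] > 0

for firm in SELL_VOLUMES:
    print(f"{firm}: but-for cause = {is_but_for_cause(firm)}, contributed = {contributed(firm)}")

# Every firm fails the but-for test (remove any one and the others still exceed
# the threshold), yet every firm plainly contributed to the sell-off.
```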
Another potential defense is the argument that other AI systems acted as “intervening causes,” breaking the causal chain. Suppose one firm’s AI initiates a sell-off, triggering other AIs to join in, amplifying the harm. The first firm could argue that the subsequent actions by other firms’ AI systems were independent factors that preclude the first firm from being found responsible. Technically, this would fall under a different prong of the causation analysis (confusingly called “legal causation” in contrast to “actual causation”), but the outcome is the same: a finding of no responsibility for the actors involved.
There is a possible solution. The “substantial factor” test (also called the “contributing factor” test) is a widely used alternative to but-for causation. To find causation, this test does not require that a cause was necessary for the outcome—only that it played a nontrivial role in the process leading to that outcome. Under this contributing factor test, all five AIs in the earlier example could be held liable if they contributed in some meaningful way to the market crash. Courts already use this test in environmental law, antitrust cases, and employment discrimination claims—domains where multiple actors often contribute to harm in complex ways that would evade but-for causation. Because it does not hinge on strict necessity, the substantial factor test is better suited to the dynamics of densely networked systems such as AI models and financial markets.
Unfortunately, the substantial factor test is facing judicial pushback. A wave of U.S. Supreme Court and appellate cases over the past decade has reinforced the but-for test as the default rule across the legal system. Even in antidiscrimination law, where the substantial factor test had long played a central role, the but-for test is now ascendant (despite objections that this will undermine efforts to combat discrimination by making it harder for employees to prove they were discriminated against). The concern is that this shift toward the but-for test will benefit negligent developers and deployers of AI systems by making it harder to hold them liable for market harms.
Evidence of Causation
Even when duty and breach have been established and the causation analysis itself would be straightforward, the practical difficulty of gathering evidence in AI-related cases can make it hard to determine whose AI systems actually contributed to a given harm. The “black box” nature of AI models means that their decision-making processes are often opaque even to their developers and deployers. Courts might struggle to determine why a given AI executed trades and whether those trades meaningfully contributed to a crash. (Indeed, to the extent that defendants can produce evidence bearing on causation or related aspects of their AI system’s behavior, the sheer complexity of the computational activity and data involved could enable them to “flood the zone,” overwhelming the court and other parties with documents and data.)
One way to address the evidentiary issues involving AI-related causation would be to establish a rebuttable presumption of causation under certain conditions. For example, if an adverse market event follows synchronized AI trading by multiple parties using similar models, courts could presume all parties to have participated in causing the outcome unless proven otherwise. This would shift the burden onto those best positioned to explain their AI’s role (or lack thereof) in the causal stream.
Conclusion
Regulators are rightly turning their attention to the risks AI poses for market stability. Some have called for promoting AI model diversity and monitoring monoculture effects involving AI. And some are considering the EU AI Act’s risk-based approach as a source of inspiration. But recent experience from financial markets suggests a deeper lesson for AI governance more broadly: without addressing current causation doctrine, there is a serious chance that responsibility will slip through the cracks—that our legal and regulatory systems will fail to hold wrongdoers accountable even when duty and breach are clear, and that other attempts at doctrinal reform will be rendered obsolete.
What sets AI apart from other technological revolutions is that, for the first time in history, we are dealing with a technology that even its creators struggle to understand and control. As a result, its ongoing integration into systems as sensitive, complex, and consequential as financial markets—inevitable though that might be—introduces unprecedented risks. Promoting AI accountability across the legal and regulatory landscape will require new measures to help lawmakers and regulators understand how these systems interact and contribute to high-risk outcomes. And we can expect that these challenges will require us to rethink how the law approaches the hidden complexities of cause and effect.