KEYNOTE

A Common Law for the Age of Artificial Intelligence: Incremental Adjudication, Institutions, and Relational Non-Arbitrariness

Introduction

The majority of vehicles on California’s vast network of roads make considerable use of information technology. [Footnote 1: To some extent, regulators have helped to drive the increasing importance of computing technology in the routine operation of automobiles. See Bill Canis, Cong. Research Serv., R44800, Issues with Federal Motor Vehicle Safety Standards 11–19 (2017), https://fas.org/sgp/crs/misc/R44800.pdf [https://perma.cc/TS69-SHYZ].] Although most are not yet capable of anything approaching fully autonomous driving, already it is possible to witness something like the following scene. A driver steering one vehicle spies a newer car’s reflection in the rear-view mirror. The newer car appears to be driving itself. Whatever the official limits on that sleek vehicle’s capability, [Footnote 2: See generally David Welch & Elisabeth Behrmann, Who’s Winning the Self-Driving Car Race?, Bloomberg (May 7, 2018), https://www.bloomberg.com/news/features/2018-05-07/who-s-winning-the-self-driving-car-race [https://perma.cc/HMC6-8LEN] (noting that the “road to autonomy is long and exceedingly complicated”); Tesla Autopilot—Review Including Full Self-Driving for 2019, AutoPilot Review, https://www.autopilotreview.com/tesla-autopilot-features-review [https://perma.cc/Z5CX-2CPS] (last visited July 29, 2019) (describing Tesla’s self-driving capabilities). The extent to which a manufacturer appropriately represents to consumers or regulators the capacity of an autopilot function that falls short of full automation capability can raise plenty of legal issues under contract law, tort law, and consumer protection statutes and regulations. See, e.g., Edvard Pettersson & Dana Hull, Tesla Sued over Fatal Crash Blamed on Autopilot Malfunction, Bloomberg (May 1, 2019), https://www.bloomberg.com/news/articles/2019-05-01/tesla-sued-over-fatal-crash-blamed-on-autopilot-navigation-error [https://perma.cc/VFX7-94UF]. Indeed, it’s far from clear whether a concept such as “full” automation is even viable when functions that humans colloquially bundle into a single category, such as driving, are easily disaggregated into distinct sub-functions that may call for different automation processes or degrees of human interaction, and when consumers routinely use available technologies in ways that fail to correspond to prescribed limits.] the person in its driver’s seat seems to have no interaction with the steering wheel when the driver of the older vehicle begins observing. Instead, the person in the driver’s seat of that car is engaged in a mix of what seems like personal grooming, texting, and distracted glancing out the side window. Almost subconsciously, the driver of the older car realizes he is tweaking his own driving to test (within the limits of what’s safe, of course) the way the algorithm appears to be driving the car behind him. If the older car slowed down or applied the brakes, the newer car behind would slow—gently if the front car decelerated slowly, and somewhat more suddenly if the driver of the older car applied the brakes more unexpectedly. Then the driver of the older vehicle realizes that if he stops for traffic and waits for the car in front to advance a bit before quickly accelerating, the autopiloted car stays behind and opens up a gap in traffic, tempting drivers in other lanes to switch into the opened-up spot. But if the driver of the older car speeds up more gradually, the newer vehicle stays close to the older car.
So the older car’s driver could effectively tighten the invisible coupling between his car and the more autonomous one or break it based on the rate of acceleration. Finally, when the lane next to the older car is clear, the driver realizes that a slight deviation in how centered his car is in the original lane achieves something significant—it seems to make the autopilot in the newer car behind disengage, forcing that driver to take over the steering wheel.
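The behavior the driver exploits is consistent with a simple car-following controller. As a purely illustrative sketch, and not any manufacturer’s actual control logic, the following Python fragment shows how a proportional rule over headway and closing speed, with bounded acceleration, would reproduce each observation above; the function name, gains, and limits (follower_acceleration, k_gap, k_speed, max_accel, max_brake) are hypothetical.

    # Illustrative sketch only: a simple proportional car-following rule,
    # not any vendor's real autopilot logic.
    def follower_acceleration(gap_m, lead_speed, own_speed,
                              desired_gap_m=30.0,
                              k_gap=0.1, k_speed=0.5,
                              max_accel=2.0, max_brake=-6.0):
        """Commanded acceleration (m/s^2) for the following car."""
        gap_error = gap_m - desired_gap_m        # positive: too far back
        closing_speed = lead_speed - own_speed   # negative: closing in
        accel = k_gap * gap_error + k_speed * closing_speed
        # The bounded output captures the asymmetry the driver notices:
        # hard lead-car braking saturates toward max_brake (a sudden
        # response), while rapid lead-car acceleration outruns the
        # modest max_accel, opening a gap in traffic.
        return max(max_brake, min(max_accel, accel))

On this toy model, even the lane-centering trick has a plausible analog: once the lead vehicle drifts outside whatever sensing envelope the controller assumes, the system can no longer trust its inputs and hands control back to the human.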

Even these few seconds of reciprocal steering and autopiloting on a California freeway tell a story: Simple choices can shape complex norms about how we rely on our machine infrastructure. More than simply emphasizing the importance of intricate algorithmic details affecting vehicular behavior, these stories also underscore how much humans are witnessing the steady integration of manufactured intelligence into everyday social life. [Footnote 3: See Meredith Whittaker et al., AI Now Report 2018, at 10–11 (2018), https://ainowinstitute.org/AI_Now_2018_Report.pdf [https://perma.cc/JL95-7XKH] (describing the variety of settings where people routinely interact with systems displaying characteristics of artificial intelligence, and the broad range of functions performed); Ted Greenwald, What Exactly Is Artificial Intelligence, Anyway?, Wall St. J. (Apr. 30, 2018), https://www.wsj.com/articles/what-exactly-is-artificial-intelligence-anyway-1525053960 (on file with the Columbia Law Review) (same).]
No doubt a human driver can feel like the Oscar Isaac character dancing with the robot in the film Ex Machina. [Footnote 4: Ex Machina (Film4 & DNA Films 2014).] Sometimes this means that humans will be shaped in subtle but potentially enormously consequential ways by artificial intelligence (AI) techniques affecting the flow of information, the distance between cars, or the timing of persuasive messages, for example. [Footnote 5: See, e.g., Robert M. Bond et al., A 61-Million-Person Experiment in Social Influence and Political Mobilization, 489 Nature 295 (2012) (finding that randomly assigned political mobilization Facebook messages influenced Facebook users’ offline political activity).] Yet when we share the road, and indeed the world, with artificially intelligent systems, the influence can also run in the opposite direction: Influencing the performance of an AI system need not be an elaborate, high barrier-to-entry activity. The aforementioned driver’s heavily analog, twentieth-century methods did fine in controlling, to some extent, a complex amalgam of software and hardware that is almost certainly also susceptible to—if surely somewhat tightly secured against—more sophisticated hacking. [Footnote 6: Lying somewhere in between sophisticated cybersecurity intrusions and easily deployed human-driven techniques to control AI systems is the use of adversarial attacks to disrupt the expected operations of machine learning systems. See, e.g., Alexey Kurakin, Ian J. Goodfellow & Samy Bengio, Adversarial Machine Learning at Scale 1–2 (2017), https://arxiv.org/pdf/1611.01236.pdf [https://perma.cc/2XBM-UVD2] (“[N]eural networks and many other categories of machine learning models are highly vulnerable to attacks based on small modifications of the input to the model at test time . . . .”).] Indeed, the co-evolution of human and artificial intelligence—what we could call our dance with machines—is well on its way to becoming routine. The dance continues as we navigate artificial chatbots, insurance transactions, court avatars, earnest advertising appeals, and borders.
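Footnote 6’s point about “small modifications of the input” can be made concrete. The sketch below illustrates the fast gradient sign method discussed in the adversarial machine learning literature cited there: it nudges each input value by a small epsilon in whichever direction most increases the model’s loss. It is a minimal, hypothetical illustration (the name fgsm_perturb is mine), assuming only a differentiable PyTorch classifier.

    # Minimal sketch of an adversarial perturbation (fast gradient sign
    # method); illustrative only, assuming a differentiable classifier.
    import torch

    def fgsm_perturb(model, x, y, epsilon=0.01):
        """Return x shifted by epsilon toward higher loss for label y."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Each input value moves by at most epsilon, often imperceptibly
        # to a human, yet the model's prediction can flip.
        return (x + epsilon * x.grad.sign()).detach()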

Lurking in the background is law, along with the assumptions and norms it helps sustain. That this dance is playing out in the world’s most economically complex and geopolitically powerful common law jurisdiction—the United States, still the preeminent hub for innovation in AI [Footnote 7: See Sarah O’Meara, China’s Ambitious Quest to Lead the World in AI by 2030, 572 Nature 427, 428 (2019) (“Most of the world’s leading AI-enabled semiconductor chips are made by US companies such as Nvidia, Intel, Apple, Google and Advanced Micro Devices.”).]—makes it appropriate to explore what relevance the common law and AI hold for each other. In fact, even accounts of American law that foreground the administrative state retain a prominent if not starring role for the system of incremental adjudication associated with American common law. Indeed, the roads, buildings, and corners of cyberspace where humans are increasingly interacting with manufactured intelligence also reveal another development of considerable importance for lawyers and judges: AI is becoming increasingly relevant to the American system of incremental, common law adjudication. The design of a vehicle with some capacity for autonomous driving can spur contract and tort disputes with qualities both familiar and novel. [Footnote 8: See, e.g., Mark A. Geistfeld, A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation, 105 Calif. L. Rev. 1611, 1632–74 (2017) (discussing manufacturer liability for autonomous vehicle crashes and hacks); Bryant Walker Smith, Automated Driving and Product Liability, 2017 Mich. St. L. Rev. 1, 32–56 (discussing products liability and personal injury litigation in the context of autonomous vehicles); see also Jack Boeglin, The Costs of Self-Driving Cars: Reconciling Freedom and Privacy with Tort Liability in Autonomous Vehicle Regulation, 17 Yale J.L. & Tech. 171, 174–75, 185–201 (2015) (noting the “uncertainty surrounding the complex liability issues for crashes involving [autonomous vehicles], which, in many ways defy the traditional conceptions of fault and agency at play in automobile accidents”).] Even decades ago, American courts were already facing legal questions foreshadowing dilemmas one can reasonably expect the present century to serve up about the balance of human and machine decisionmaking. A court in Arizona, for example, was forced to resolve whether punitive damages could be imposed on a transportation company that failed to use information technology to track the work of its drivers and limit them from working excessive hours. [Footnote 9: See Torres v. N. Am. Van Lines, Inc., 658 P.2d 835, 838–39 (Ariz. Ct. App. 1982).] Just as courts once had to translate common law concepts like chattel trespass to cyberspace, [Footnote 10: See Intel Corp. v. Hamidi, 71 P.3d 296, 308 (Cal. 2003) (declining to find that emails from a former employee to numerous current employees criticizing the company’s employment practices could, despite their unauthorized nature, constitute trespass to chattels).] new legal disputes—turning on subtle distinctions revealed by digital evidence of neural-network evolution that bear on a party’s responsibility for causing harm, for example—will proliferate as reliance on AI becomes more common.
The reasonableness of a driver’s decision to rely on a vehicle’s autonomous capacity, or an organization’s choice to delegate a complicated health or safety question to a neural network, will almost certainly spur a new crop of disputes in American courtrooms.

Given the speed and importance of these developments, my purpose here is to begin surveying the fertile terrain where the American system of common law adjudication intersects with AI. American society depends both on technology and on the legal system’s tradition of incremental common law adjudication. The growing importance of AI gives us reason to consider how AI, common law, and society may affect each other. In particular, such an exploration should take account of the common law’s role as a default backstop in social and economic life in the United States and a number of other major economies. Even beyond the strict doctrinal limits of torts, property, and contracts, common law ideas tend to set the terms for conversations among elites and even the larger public about the way social and economic interactions ordinarily occur, and about how public agencies should analyze the problems—ranging from financial regulation to occupational safety—they are designed to mitigate. [Footnote 11: See, e.g., Mariano-Florentino Cuéllar, Administrative War, 82 Geo. Wash. L. Rev. 1343, 1439 (2014) (discussing how ideological norms and the common law appeared to buttress each other and reinforced concerns about government ownership of industry during wartime mobilization in the early 1940s).] Beyond serving as a default means of structuring interactions and a framework for analyzing social and economic life, the common law also offers an apt metaphor for how law, society, and technological change affect each other over the drawn-out process of applying broad social commitments to specific fact patterns. So it is no surprise that any intellectually candid conversation about law and AI—particularly in the United States—must be to a considerable extent a conversation about the relationship between AI and the common law.

After defining some terms and setting the stage, I offer three preliminary ideas. First, our society already regulates AI through a backstop arising from the common law—and rightly so. Second, some degree of explainability, well-calibrated to foster societal deliberation about consequential decisions, is foundational to making any AI involved in human decisionmaking compatible with tort and other common law doctrines. At least one version of this ideal that merits attention could be termed “relational non-arbitrariness,” to foreground the importance of buttressing—through both the common law and public law—society’s capacity to deliberate about, and revise, the process through which it makes the choices that matter most. Finally, common law doctrines have room to integrate societal considerations involving organizational realities and institutional capacity, as well as concerns, such as the erosion of human knowledge, that would be risky to ignore.