Introduction
The majority of vehicles on California’s vast network of roads make considerable use of information technology.
Although most are not yet capable of anything approaching fully autonomous driving, it is already possible to witness something like the following scene. A driver steering one vehicle spies a newer car’s reflection in the rear-view mirror. The newer car appears to be driving itself. Whatever the official limits on that sleek vehicle’s capability, the person in its driver’s seat has no apparent interaction with the steering wheel when the driver of the older vehicle begins observing. Instead, the person in the driver’s seat of that car is engaged in a mix of what seems like personal grooming, texting, and distracted glancing out the side window. Almost subconsciously, the driver of the older car realizes he is tweaking his own driving to test (within the limits of what’s safe, of course) the way the algorithm appears to be driving the car behind him. If the older car slows down or applies the brakes, the newer car behind slows too: gently if the front car decelerates gradually, and more abruptly if the driver of the older car brakes unexpectedly. Then the driver of the older vehicle realizes that if he stops for traffic and waits for the car in front to advance a bit before quickly accelerating, the autopiloted car stays behind and opens up a gap in traffic, tempting drivers in other lanes to switch into the newly opened spot. But if the driver of the older car speeds up more gradually, the newer vehicle stays close. So the older car’s driver can effectively tighten or break the invisible coupling between his car and the more autonomous one simply by varying his rate of acceleration. Finally, when the lane next to the older car is clear, the driver realizes that a slight deviation from the center of his lane achieves something significant: it seems to make the autopilot in the newer car behind disengage, forcing that driver to take over the steering wheel.
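The following behavior the driver observes resembles, in stylized form, a constant time-gap cruise controller. The sketch below is purely illustrative: the function name, gains, and limits are hypothetical assumptions for exposition, not any manufacturer’s actual autopilot logic. It shows how a simple control law, with a cap on comfortable acceleration, can produce both the gentle-versus-abrupt braking and the opened-up gap described in the scene.

```python
# Purely illustrative sketch: a toy "constant time-gap" following
# controller, loosely mirroring the behavior described in the scene
# above. All names and values here are hypothetical assumptions.

def follower_acceleration(gap_m, lead_speed, own_speed,
                          desired_time_gap=1.5,  # seconds of headway
                          k_gap=0.2, k_speed=0.5,
                          a_max=1.5, a_min=-8.0):
    """Commanded acceleration (m/s^2) for the following car.

    The controller seeks a gap proportional to its own speed and
    tries to match the lead car's speed. Hard braking by the lead
    car creates a large speed error and thus a sharper response;
    the comfort cap a_max means a lead car that accelerates away
    quickly opens a gap the follower closes only gradually.
    """
    desired_gap = desired_time_gap * own_speed
    gap_error = gap_m - desired_gap        # positive: following too far back
    speed_error = lead_speed - own_speed   # positive: lead car pulling away
    a = k_gap * gap_error + k_speed * speed_error
    return max(a_min, min(a, a_max))       # clamp to comfort/safety limits

# Lead car braking gently vs. braking hard (speeds in m/s, gap in meters):
gentle = follower_acceleration(30.0, 25.0, 27.0)  # mild deceleration
hard = follower_acceleration(30.0, 18.0, 27.0)    # sharper deceleration
# Lead car accelerating away quickly: the response is capped at a_max,
# so a gap opens in traffic, as in the scene above.
capped = follower_acceleration(30.0, 35.0, 27.0)
```

Varying how quickly the lead car decelerates or accelerates changes the follower’s response in just the way the driver in the anecdote exploits; real systems layer far more sophisticated perception and planning on top of any such control law.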
Even these few seconds of reciprocal steering and autopiloting on a California freeway tell a story: simple choices can shape complex norms about how we rely on our machine infrastructure. Beyond emphasizing how much intricate algorithmic details affect vehicular behavior, the scene also underscores the extent to which humans are witnessing the steady integration of manufactured intelligence into everyday social life.
No doubt a human driver can feel like the Oscar Isaac character dancing with the robot in the film Ex Machina.
Sometimes this means that humans will be shaped in subtle but enormously consequential ways by artificial intelligence (AI) techniques affecting, for example, the flow of information, the distance between cars, or the timing of persuasive messages.
Yet when we share the road, and indeed the world, with artificially intelligent systems, influence can also run in the opposite direction: affecting the performance of an AI system need not be an elaborate, high barrier-to-entry activity. The aforementioned driver’s heavily analog, twentieth-century methods sufficed to control, to some extent, a complex amalgam of software and hardware that, however tightly secured, is almost certainly also susceptible to more sophisticated hacking.
Indeed, the co-evolution of human and artificial intelligence—what we could call our dance with machines—is well on its way to becoming routine. The dance continues as we navigate artificial chatbots, insurance transactions, court avatars, earnest advertising appeals, and borders.
Lurking in the background is law, along with the assumptions and norms it helps sustain. That this dance is playing out in the world’s most economically complex and geopolitically powerful common law jurisdiction—the United States, still the preeminent hub for innovation in AI—makes it appropriate to explore what relevance the common law and AI hold for each other. In fact, even accounts of American law that foreground the administrative state retain a prominent, if not starring, role for the system of incremental adjudication associated with American common law. The roads, buildings, and corners of cyberspace where humans increasingly interact with manufactured intelligence also reveal a development of considerable importance for lawyers and judges: AI is becoming increasingly relevant to the American system of incremental, common law adjudication. The design of a vehicle with some capacity for autonomous driving can spur contract and tort disputes with qualities both familiar and novel.
Even decades ago, American courts were already facing legal questions that foreshadowed dilemmas the present century can reasonably be expected to serve up about the balance between human and machine decisionmaking. A court in Arizona, for example, was forced to resolve whether punitive damages could be imposed on a transportation company that had failed to use information technology to track the work of its drivers and prevent them from working excessive hours.
Just as courts once had to translate common law concepts like chattel trespass to cyberspace, new legal disputes—turning on subtle distinctions revealed by digital evidence of neural-network evolution that bear on a party’s responsibility for causing harm, for example—will proliferate as reliance on AI becomes more common. The reasonableness of a driver’s decision to rely on a vehicle’s autonomous capacity, or an organization’s choice to delegate a complicated health or safety question to a neural network, will almost certainly spur a new crop of disputes in American courtrooms.
Given the speed and importance of these developments, my purpose here is to begin surveying the fertile terrain where the American system of common law adjudication intersects with AI. American society depends on both technology and the role that incremental common law adjudication plays in its legal system. The growing importance of AI gives us reason to consider how AI, the common law, and society may affect one another. In particular, such an exploration should take account of the common law’s role as a default backstop in social and economic life in the United States and a number of other major economies. Even beyond the strict doctrinal limits of torts, property, and contracts, common law ideas tend to set the terms for conversations among elites, and even the larger public, about the way social and economic interactions ordinarily occur, and about how public agencies should analyze the problems—ranging from financial regulation to occupational safety—they are designed to mitigate.
Beyond serving as a default means of structuring interactions and a framework for analyzing social and economic life, the common law also offers an apt metaphor for how law, society, and technological change affect each other over the drawn-out process of applying broad social commitments to specific fact patterns. So it is no surprise that any intellectually candid conversation about law and AI—particularly in the United States—must be to a considerable extent a conversation about the relationship between AI and the common law.
After defining some terms and setting the stage, I offer three preliminary ideas. First, our society already regulates AI through a backstop arising from the common law—and rightly so. Second, some degree of explainability, well calibrated to foster societal deliberation about consequential decisions, is foundational to making any AI involved in human decisionmaking compatible with tort and other common law doctrines. At least one version of this ideal that merits attention could be termed “relational non-arbitrariness,” to foreground the importance of buttressing—through both the common law and public law—society’s capacity to deliberate about, and revise, the process through which it makes the choices that matter most. Finally, common law doctrines have room to integrate societal considerations involving organizational realities and institutional capacity, as well as concerns, such as the erosion of human knowledge, that would be risky to ignore.