How will we assess the morality of decisions made by artificial intelligence—and will our judgments be swayed by what the law says? Focusing on a moral dilemma in which a driverless car chooses to sacrifice its passenger to save more people, this study offers evidence that our moral intuitions can be influenced by the presence of the law.
Introduction
As a tree suddenly collapses into the road just ahead of your driverless car, your trusty artificial intelligence pilot Charlie swerves to avoid the danger. But now the car is heading straight toward two bicyclists on the side of the road, and there’s no way to stop completely before hitting them. The only other option is to swerve again, crashing the car hard into a deep ditch, risking terrible injury to the passenger—you.
Maybe you think you’ve heard this one. But the question here isn’t what Charlie should do. Charlie already knows what it will do. Rather, the question for us humans is this: If Charlie chooses to sacrifice you, throwing your car into the ditch to save the bicyclists from harm, how will we judge that decision?
1
This framing of the thought experiment focuses our inquiry on the psychology of public reactions to decisions made by artificial intelligence (AI). (What an autonomous system should do when faced with such dilemmas is the focus of much other work in the current “moral machine” discourse. See infra note 8.) In saying that Charlie already knows what it will do, I mean only what is obvious—that by the time such an accident happens, the AI pilot will have already internalized (through machine learning, old-fashioned programming, or other means) some way of making the decision. This is not to posit that Charlie’s decision in the story is the normatively better one. Nor does this study presuppose that public opinion surveys should be used for developing normative principles for guiding lawmakers, AI creators, or the AI systems themselves. Rather, its premise is that our collective moral intuitions—including how we react after hearing about an accident in which the self-driving car had to choose whom to sacrifice—might affect public approval of any such law or normative policy.
Is it morally permissible, or even morally required, because it saves more people?
2
This particular version of the dilemma, posing a trade-off between passengers and outsiders, is pervasive in the public discourse about driverless cars. See, e.g., Karen Kaplan, Ethical Dilemma on Four Wheels: How to Decide When Your Self-Driving Car Should Kill You, L.A. Times (June 23, 2016), http://www.latimes.com/science/sciencenow/la-sci-sn-autonomous-cars-ethics-20160623-snap-story.html [https://perma.cc/ZSK7-SP8G]; John Markoff, Should Your Driverless Car Hit a Pedestrian to Save Your Life?, N.Y. Times (June 23, 2016), http://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html (on file with the Columbia Law Review); George Musser, Survey Polls the World: Should a Self-Driving Car Save Passengers, or Kids in the Road?, Sci. Am. (Oct. 24, 2018), https://www.scientificamerican.com/article/survey-polls-the-world-should-a-self-driving-car-save-passengers-or-kids-in-the-road/ [https://perma.cc/P57B-EQZV].
Or is it morally prohibited, because Charlie’s job is to protect its own passengers?
3
In 2016, Mercedes-Benz found itself in a public relations mess as news stories trumpeted how a company representative had let slip that the company’s future self-driving cars would prioritize the car’s passengers over the lives of pedestrians in a situation where those are the only two options. Michael Taylor, Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety over Pedestrians, Car & Driver (Oct. 7, 2016), https://www.caranddriver.com/news/a15344706/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/ [https://perma.cc/KJB8-PY4Z]. What the executive actually said might be interpreted to mean that guaranteed avoidance of injury would take priority over uncertain avoidance of injury, and that this preference would tend to favor protecting the passenger in the car. See id. (quoting the executive as saying: “If you know you can save at least one person, at least save that one. Save the one in the car . . . . If all you know for sure is that one death can be prevented, then that’s your first priority”). Regardless, Daimler responded with a press release denying any such favoritism. Press Release, Daimler, Daimler Clarifies: Neither Programmers nor Automated Systems Are Entitled to Weigh the Value of Human Lives (Oct. 18, 2016), http://media.daimler.com/marsMediaSite/en/instance/ko/Daimler-clarifies-Neither-programmers-nor-automated-systems-.xhtml?oid=14131869 [https://perma.cc/CZ2G-YSVF].
And what about the law—will our moral judgments be influenced by knowing what the law says?
4
In prior work using a similar survey experiment, I have presented evidence that in a standard trolley problem dilemma (involving a human decisionmaker who can turn a runaway train), one’s moral intuitions about such a sacrifice can be influenced by knowing what the law says. Bert I. Huang, Law and Moral Dilemmas, 130 Harv. L. Rev. 659, 680–95 (2016). In addition to the most obvious difference between the two studies (an autonomous vehicle versus a human decisionmaker), the nature of their dilemmas also differs: The earlier study sets a moral duty to save more lives against a moral prohibition against harming an innocent bystander (and thus engages intuitions mapping onto such classic distinctions as act versus omission, or intended versus side effects); in contrast, this study sets a moral duty to save more lives against a moral duty to protect the passenger (and by design seeks to blur the classic distinctions). See infra section I.A.
What if the law says that Charlie must minimize casualties during an accident? Or what if the law says instead that Charlie’s priority must be to protect its passengers?
In this Essay, I present evidence that the law can influence our moral intuitions about what an artificial intelligence (AI) system chooses to do in such a dilemma. In a randomized survey experiment, Charlie’s decision is presented to all subjects—but some subjects are told that the law says the car must minimize casualties without favoring its own passengers; others are told that the law says the car must prioritize protecting its own passengers over other people; and still others are told that the law says nothing about this.
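For readers who want a concrete picture of the experimental logic, the following is a minimal sketch, in Python, of how random assignment to the three legal conditions and a simple tally of responses might look. The condition labels, sample size, response categories, and response model here are hypothetical placeholders for illustration only; they are not the study’s actual survey instrument, data, or analysis code.

import random
from collections import Counter

# Hypothetical labels for the three between-subjects conditions
# (placeholders, not the study's actual survey wording).
CONDITIONS = [
    "law_requires_minimizing_casualties",
    "law_requires_protecting_passenger",
    "law_says_nothing",
]

# Hypothetical response categories for the moral-judgment question.
JUDGMENTS = ["morally_required", "morally_permitted", "morally_prohibited"]

def run_mock_survey(n_subjects: int = 900, seed: int = 0) -> dict:
    """Randomly assign each subject to one legal condition and record a
    (simulated) moral judgment; returns per-condition tallies."""
    rng = random.Random(seed)
    tallies = {condition: Counter() for condition in CONDITIONS}
    for _ in range(n_subjects):
        condition = rng.choice(CONDITIONS)  # random assignment to a condition
        # Placeholder response model: a real study would record the
        # subject's actual answer rather than a random draw.
        judgment = rng.choice(JUDGMENTS)
        tallies[condition][judgment] += 1
    return tallies

if __name__ == "__main__":
    for condition, counts in run_mock_survey().items():
        print(condition, dict(counts))

The point of the between-subjects design is captured in the comparison across tallies: because subjects are assigned to a legal condition at random, differences in the distribution of moral judgments across conditions can be attributed to the legal framing rather than to preexisting differences among the subjects.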
To preview the findings: More people believe the sacrifice of the passenger to be morally required when they are told that the law says a driverless car must minimize casualties without favoritism. And more people believe the sacrifice to be morally prohibited when they are told instead that the law says the car must give priority to protecting its own passengers.
5
See infra Part II. As for who people think should bear some of the moral responsibility for the decision, see infra section II.A.
These findings give us a glimpse not of the law’s shadow but of the law’s halo.
6
I borrow this illuminating phrase from Professor Donald Regan. See Donald H. Regan, Law’s Halo, in Philosophy and Law 15, 15 (Jules Coleman & Ellen Frankel Paul eds., 1987) (coining the phrase “law’s moral halo” to explain the “strong inclination” to “invest” law with moral significance, even if one does not believe in a moral obligation to obey the law). For an insightful review of legal and empirical literature on the interplay between law and moral attitudes, see Kenworthey Bilz & Janice Nadler, Law, Moral Attitudes, and Behavioral Change, in The Oxford Handbook of Behavioral Economics and the Law 241, 253–58 (Eyal Zamir & Doron Teichman eds., 2014).
And if our moral intuitions about such dilemmas can be swayed by the presence of the law, intriguing implications follow. First is the possibility of a feedback loop, in which an initial choice about which moral principles to embed into the law (say, minimizing casualties) may come to alter our later moral judgments (say, upon hearing about a real-life accident in which the AI chose to sacrifice the passenger), thereby amplifying approval of that same law and others like it. In this way the law may well become “a major focal point for certain pronounced societal dilemmas associated with AI,” as Justice Mariano-Florentino Cuéllar predicts in his contribution to this Symposium, through its self-reinforcing influence on our collective moral sense.
7
Mariano-Florentino Cuéllar, A Common Law for the Age of Artificial Intelligence: Incremental Adjudication, Institutions, and Relational Non-Arbitrariness, 119 Colum. L. Rev. 1773, 1779 (2019).
By illustrating the law’s potential influence on how we judge AI decisions, moreover, this study also complicates the “moral machine” discourse in both its empirical and normative dimensions.
8
This discourse addresses how AI systems should make decisions that involve moral or ethical issues. The empirical literature includes a remarkable recent study that used an online interface to collect millions of crowdsourced answers from around the world about what a driverless car should do in countless variations of such car-crash dilemmas. See Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon & Iyad Rahwan, The Moral Machine Experiment, Nature, Nov. 1, 2018, at 59, 59–60 (describing the setup of the Moral Machine interface and associated data collection). Again, it is very much open to question how such empirical findings might be used, if at all, to guide policymaking. For a small sampling of the growing normative literature, see generally German Fed. Ministry of Transport & Dig. Infrastructure, Ethics Commission: Automated and Connected Driving (2017), https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile [https://perma.cc/ET9P-WYM9] (developing normative principles for ethical decisionmaking by artificial intelligence systems in the context of driverless cars); Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right from Wrong (2009) (same, but not limited to the specific context of driverless cars); Alan Winfield, John McDermid, Vincent C. Müller, Zoë Porter & Tony Pipe, UK-RAS Network, Ethical Issues for Robotics and Autonomous Systems (2019), https://www.ukras.org/wp-content/uploads/2019/07/UK_RAS_AI_ethics_web_72.pdf [https://perma.cc/3XZ6-KJTM] (same); cf. Gary Marcus, Moral Machines, New Yorker (Nov. 24, 2012), https://www.newyorker.com/news/news-desk/moral-machines [https://perma.cc/G4TR-U9ZV] (arguing that “[b]uilding machines with a conscience is a big job, and one that will require the coordinated efforts of philosophers, computer scientists, legislators, and lawyers”). The moral machine discourse overlaps with, or is sometimes included within, other headings such as “machine ethics” or “robot ethics.” See generally Machine Ethics (Michael Anderson & Susan Leigh Anderson eds., 2011); Robot Ethics 2.0 (Patrick Lin, Keith Abney & Ryan Jenkins eds., 2017).
On the empirical front, these findings suggest that investigations of people’s intuitions about what an AI system should do when facing such a moral dilemma should take into account those people’s prior impressions about the law.
9
This suggestion and the possibility of a feedback loop, noted above, are not unique to the context of autonomous vehicles. Huang, supra note 4, at 695–97 (raising these points in the context of human decisionmakers). In that prior work, I also queried whether such a feedback loop might even give rise to multiple equilibria. Id. at 696. But just to be clear, neither study has sought to investigate the other side of the loop (how moral intuitions about such dilemmas might shape the formation of relevant law).
Although most people might not yet hold any impressions about the laws of driverless cars, public awareness of such laws can be expected to grow in the coming years, and even now people may already have in mind the laws governing human drivers. On the normative front, it would seem remiss to overlook the preexisting influence of the law on our moral intuitions if those intuitions, observed or felt, were then regarded as unclouded moral guidance for shaping new laws.
After Part I elaborates on this study’s design, Part II will detail its findings and limitations, while the Conclusion will highlight unanswered questions and fanciful extensions for the future.