A recurrent concern about machine learning algorithms is that they operate as “black boxes,” making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges are confronting machine learning algorithms with increasing frequency, including in criminal, administrative, and civil cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of “explainable AI” (xAI). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts. There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Introduction
A recurrent concern about machine learning algorithms is that they operate as “black boxes.” Because these algorithms repeatedly adjust the way that they weigh inputs to improve the accuracy of their predictions, it can be difficult to identify how and why the algorithms reach the outcomes they do. Yet humans—and the law—often desire or demand answers to the questions “Why?” and “How do you know?” One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions, an approach sometimes called “explainable AI” (xAI). Legal and computer science scholarship has identified various actors who could benefit from (or who should demand) xAI. These include criminal defendants who receive long sentences based on opaque predictive algorithms,
1
See, e.g., Megan T. Stevenson & Christopher Slobogin, Algorithmic Risk Assessments and the Double-Edged Sword of Youth, 36 Behav. Sci. & L. 638, 639 (2018).
military commanders who are considering whether to deploy autonomous weapons,
2
See Matt Turek, Explainable Artificial Intelligence (XAI), DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence [https://perma.cc/ZNL9-86CF] (last visited Aug. 13, 2019).
and doctors who worry about legal liability for using “black box” algorithms to make diagnoses.
3
W. Nicholson Price II, Medical Malpractice and Black-Box Medicine, in Big Data, Health Law, and Bioethics 295, 295–96 (I. Glenn Cohen, Holly Fernandez Lynch, Effy Vayena & Urs Gasser eds., 2018).
At the same time, there is a robust—but largely theoretical—debate about which algorithmic decisions require an explanation and which forms these explanations should take.
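To make the intuition concrete, consider a minimal illustrative sketch, written in Python with the scikit-learn library and purely synthetic data, of one simple form of explanation: reporting how heavily a model weighs each input. The feature names below are hypothetical; the point is only that even this rudimentary readout answers a version of the question “Why?” that a fully opaque model does not.

```python
# Illustrative sketch only: a simple model whose learned weights can be
# read out directly. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a case described by three
# made-up features; the label is the outcome the model tries to predict.
feature_names = ["prior_incidents", "age", "employment_status"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] * 1.5 - X[:, 1] * 0.5 + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A rudimentary "global" explanation: which features the model weighs
# most heavily, and in which direction.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight = {weight:+.2f}")
```

Real-world systems are far more complex than this linear model, which is precisely why more sophisticated xAI techniques, and the legal pressure to produce them, matter.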
Although these conversations are critically important, they ignore a key set of actors who will interact with machine learning algorithms with increasing frequency and whose lifeblood is real-world controversies: judges.
4
See, e.g., Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking for, 16 Duke L. & Tech. Rev. 18, 67 (2017) [hereinafter Edwards & Veale, Slave to the Algorithm] (questioning whether xAI will be useful because “[i]ndividual data subjects are not empowered to make use of the kind of algorithmic explanations they are likely to be offered” but ignoring the possible role for courts as users of xAI).
This Essay argues that judges will confront a variety of cases in which they should demand explanations for algorithmic decisions, recommendations, or predictions. If and as they demand these explanations, judges will play a seminal role in shaping the nature and form of xAI. Using the tools of the common law, courts can develop what xAI should mean in different legal contexts, including criminal, administrative, and civil cases. Further, there are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI.
5
Cf. Andrew Tutt, An FDA for Algorithms, 69 Admin. L. Rev. 83, 109 (2017) (proposing a federal statutory standard for explainability and arguing that “[i]f explainability can be built into algorithmic design, the presence of a federal standard could nudge companies developing machine-learning algorithms into incorporating explainability from the outset”). I share Andrew Tutt’s view that it is possible to provide incentives for designers to incorporate xAI into their products, but I believe that there are advantages to developing these rules using common law processes.
In addition, courts are likely to stimulate (directly or indirectly) the production of different forms of xAI that are responsive to distinct legal settings and audiences. At a more theoretical level, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
Part I of this Essay introduces the idea of xAI. It identifies the types of concerns that machine learning raises and that xAI may assuage. It then considers some forms of xAI that currently exist and discusses the advantages of each form. Finally, it identifies some of the basic xAI-related choices judges will need to make when they need or wish to understand how a given algorithm operates.
Against that background, the Essay then turns to two concrete areas of law in which judges are likely to play a critical role in fleshing out whether xAI is required and, if so, what forms it should take. Part II considers the use of machine learning in agency rulemaking and adjudication and argues that judges should insist on some level of xAI in evaluating the reasons an agency gives when it produces a rule or decision using algorithmic processes.
6
For the argument that judicial review of agency rulemaking employs common law methodologies, see Jack M. Beermann, Common Law and Statute Law in Administrative Law, 63 Admin. L. Rev. 1, 3 (2011).
Further, if agencies employ advanced algorithms to help them sort through high volumes of comments on proposed rules, judges should seek explanations about those algorithms’ parameters and training.
7
See Melissa Mortazavi, Rulemaking Ex Machina, 117 Colum. L. Rev. Online 202, 207–08 (2017), https://live-columbia-law-review.pantheonsite.io/wp-content/uploads/2017/09/Mortavazi-v5.0.pdf [https://perma.cc/SF8R-EG9C] (examining the possibility that agencies may deploy automated notice-and-comment review).
In both cases, if judges demand xAI as part of the agency’s reason-giving process, agency heads themselves will presumably insist that their agencies regularly employ xAI in anticipation of litigation.
Part III explores the use of predictive algorithms in criminal sentencing. These algorithms predict the likelihood that a defendant will commit additional crimes in the future. Here, the judge herself is the key consumer of the algorithm’s recommendations and has a variety of incentives—including the need to give reasons for a sentence, concerns about reversal on appeal, a desire to ensure due process, and an interest in demonstrating institutional integrity—to demand explanations for how the sentencing algorithm functions.
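For illustration only, the kind of explanation a sentencing judge might demand could resemble a per-defendant breakdown of how each input pushed a risk score up or down. The sketch below uses Python with invented feature names, weights, and values; it does not describe any actual risk-assessment tool, but it shows how a simple linear scoring model can be made to account for an individual prediction.

```python
# Illustrative sketch only: a per-case ("local") explanation for a
# hypothetical linear risk-scoring model. Feature names, weights, and
# values are invented and do not reflect any real sentencing tool.
import math

# Hypothetical learned weights and intercept for a linear risk model.
weights = {
    "prior_convictions": 0.8,
    "age_at_first_arrest": -0.3,
    "current_charge_severity": 0.5,
}
intercept = -1.2

# Hypothetical, already-normalized inputs for a single defendant.
defendant = {
    "prior_convictions": 1.4,
    "age_at_first_arrest": -0.6,
    "current_charge_severity": 0.9,
}

# Each feature's contribution to the raw score is weight * value; the
# breakdown shows what pushed this defendant's score up or down.
contributions = {name: weights[name] * defendant[name] for name in weights}
raw_score = intercept + sum(contributions.values())
risk_probability = 1 / (1 + math.exp(-raw_score))  # logistic link

print(f"Predicted risk: {risk_probability:.0%}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```

Whether a breakdown of this sort would satisfy reason-giving or due process requirements is precisely the kind of question the common law of xAI would work out case by case.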
As courts employ and develop existing case law in the face of predictive algorithms that arise in an array of litigation, they will create the “common law of xAI,” law sensitive to the requirements of different audiences (judges, juries, plaintiffs, or defendants) and different uses for the explanations given (criminal, civil, or administrative law settings).
8
See Finale Doshi-Velez & Mason Kortz, Berkman Klein Ctr. Working Grp. on Explanation & the Law, Accountability of AI Under the Law: The Role of Explanation 12 (2017), https://arxiv.org/pdf/1711.01134.pdf [https://perma.cc/LQB3-HG7L] (“As we have little data to determine the actual costs of requiring AI systems to generate explanations, the role of explanation in ensuring accountability must also be re-evaluated from time to time, to adapt with the ever-changing technology landscape.”).
A nuanced common law of xAI will also provide important incentives and feedback to algorithm developers as they seek to translate what are currently theoretical debates into concrete xAI tools.
9
At least one scholarly piece has concluded that “there is some danger of research and legislative efforts being devoted to creating rights to a form of transparency that may not be feasible, and may not match user needs.” Edwards & Veale, Slave to the Algorithm, supra note 4, at 22. A common law approach to xAI can help ensure that the solutions are both feasible and match user needs in specific cases.
Courts should focus on the power of xAI to identify algorithmic error and bias and the need for xAI to be comprehensible to the relevant audience. Further, they should be attuned to dynamic developments in xAI decisions across categories of cases when looking for relevant precedent and guidance.