RULEMAKING AND INSCRUTABLE AUTOMATED DECISION TOOLS

Katherine J. Strandburg*

* Alfred Engelberg Professor of Law and Faculty Director of the Information Law Institute, New York University School of Law. I am grateful for excellent research assistance from Madeline Byrd and Thomas McBrien and for summer research funding from the Filomen D. Agostino and Max E. Greenberg Research Fund.

Complex machine learning models derived from personal data are increasingly used in making decisions important to people’s lives. These automated decision tools are controversial, in part because their operation is difficult for humans to grasp or explain. While scholars and policymakers have begun grappling with these explainability concerns, the debate has focused on explanations to decision subjects. This Essay argues that explainability has equally important normative and practical ramifications for decision-system design. Automated decision tools are particularly attractive when decisionmaking responsibility is delegated and distributed across multiple actors to handle large numbers of cases. Such decision systems depend on explanatory flows among those responsible for setting goals, developing decision criteria, and applying those criteria to particular cases. Inscrutable automated decision tools can disrupt all of these flows.

This Essay focuses on explanation’s role in decision-criteria development, which it analogizes to rulemaking. It analyzes whether, and how, decision tool inscrutability undermines the traditional functions of explanation in rulemaking. It concludes that providing information about the many aspects of decision tool design, function, and use that can be explained can perform many of those traditional functions. Nonetheless, the technical inscrutability of machine learning models has significant ramifications for some decision contexts. Decision tool inscrutability makes it harder, for example, to assess whether decision criteria will generalize to unusual cases or new situations and heightens communication and coordination barriers between data scientists and subject matter experts. The Essay concludes with some suggested approaches for facilitating explanatory flows for decision-system design.


Introduction

Machine learning models derived from large troves of personal data are increasingly used in making decisions important to people’s lives. 1 See Max Fisher & Amanda Taub, Is the Algorithmification of the Human Experience a Good Thing?, N.Y. Times: The Interpreter (Sept. 6, 2018), https://static.nytimes.com/email-content/INT_5362.html (on file with the Columbia Law Review). These tools have stirred both hopes of improving decisionmaking by avoiding human shortcomings and concerns about their potential to amplify bias and undermine important social values. 2 Compare Susan Wharton Gates, Vanessa Gail Perry & Peter M. Zorn, Automated Underwriting in Mortgage Lending: Good News for the Underserved?, 13 Housing Pol’y Debate 369, 370 (2002) (finding that automated underwriting systems more accurately predict mortgage default than humans and result in higher approval rates for underserved applicants), and Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig & Sendhil Mullainathan, Human Decisions and Machine Predictions, 133 Q.J. Econ. 237, 268 (2017) (showing that applying machine learning algorithms to pretrial detention decisions could reduce the jailed population by forty-two percent without an increase in crime), with Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 9, 2018), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G [https://perma.cc/6SA7-R35L] (“Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”). It is often hard for humans to grasp or explain how or why machine-learning-based models map input features to output predictions because such models often combine large numbers of input features in complicated ways. 3 See, e.g., Finale Doshi-Velez & Mason Kortz, Accountability of AI Under the Law: The Role of Explanation 9–10 (2017), https://cyber.harvard.edu/publications/2017/11/AIExplanation [https://perma.cc/AQ5V-582E]; Jenna Burrell, How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms, Big Data & Soc’y, Jan.–June 2016, at 1, 3; Aaron M. Bornstein, Is Artificial Intelligence Permanently Inscrutable?, Nautilus (Sept. 1, 2016), http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable [https://perma.cc/B562-NCUN]; see also Info. Law Inst. at N.Y. Univ. Sch. of Law with Foster Provost, Krishna Gummadi, Anupam Datta, Enrico Bertini, Alexandra Chouldechova, Zachary Lipton & John Nay, Modes of Explanation in Machine Learning: What Is Possible and What Are the Tradeoffs?, in Algorithms and Explanations (Apr. 27, 2017), https://youtu.be/U0NsxZQTktk (on file with the Columbia Law Review).
This inherent inscrutability 4 See Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085, 1094 (2018) (defining “inscrutability” in this context as “a situation in which the rules that govern decision-making are so complex, numerous, and interdependent that they defy practical inspection and resist comprehension”). has drawn the attention of data scientists, 5 See generally Finale Doshi-Velez & Been Kim, Towards a Rigorous Science of Interpretable Machine Learning, in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics 1 (2018) (cataloging various ways to define and evaluate interpretability in machine learning); Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter & Lalana Kagal, Explaining Explanations: An Overview of Interpretability of Machine Learning, in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics 80 (2018) (“While interpretability is a substantial first step, these mechanisms need to also be complete, with the capacity to defend their actions, provide relevant responses to questions, and be audited.”); Zachary C. Lipton, The Mythos of Model Interpretability, ACMQueue (July 17, 2018), https://queue.acm.org/detail.cfm?id=3241340 [https://perma.cc/CZH3-S9JG] (discussing “the feasibility and desirability of different notions of interpretability” in machine learning). legal scholars, 6 See, e.g., Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For, 16 Duke L. & Tech. Rev. 18, 19–22 (2017); Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson & Harlan Yu, Accountable Algorithms, 165 U. Pa. L. Rev. 633, 636–42 (2017); Selbst & Barocas, supra note 4; Andrew D. Selbst, Response, A Mild Defense of Our New Machine Overlords, 70 Vand. L. Rev. En Banc 87, 88–89 (2017), https://cdn.vanderbilt.edu/vu-wp0/wp-content/uploads/sites/278/2017/05/23184939/A-Mild-Defense-of-Our-New-Machine-Overlords.pdf [https://perma.cc/MCW7-X89L]; Tal Z. Zarsky, Transparent Predictions, 2013 U. Ill. L. Rev. 1503, 1506–09; Robert H. Sloan & Richard Warner, When Is an Algorithm Transparent?: Predictive Analytics, Privacy, and Public Policy, IEEE Security & Privacy, May/June 2018, at 18, 18.
policymakers, 7 See, e.g., Algorithmic Accountability Act of 2019, S. 1108, 116th Cong. (2019). and others 8 See, e.g., Reuben Binns, Algorithmic Accountability and Public Reason, 31 Phil. & Tech. 543, 543–45 (2018); Tim Miller, Explanation in Artificial Intelligence: Insights from the Social Sciences, 267 Artificial Intelligence 1, 1–2 (2019); Brent Mittelstadt, Chris Russell & Sandra Wachter, Explaining Explanations in AI, in FAT*’19 at 279, 279 (2019); Deirdre K. Mulligan, Daniel N. Kluttz & Nitin Kohli, Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions, in After the Digital Tornado (Kevin Werbach ed., forthcoming 2020) (manuscript at 1–2), https://ssrn.com/abstract=3311894 (on file with the Columbia Law Review); Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J.L. & Tech. 841, 842–44 (2018); Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi, The Ethics of Algorithms: Mapping the Debate, Big Data & Soc’y, July–Dec. 2016.
to the explainability problem.
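
To make the inscrutability point concrete, the following minimal sketch (mine, not the Essay’s; the synthetic data and model choices are purely illustrative assumptions) contrasts a model whose mapping from inputs to outputs can be read directly from a handful of parameters with an ensemble that combines hundreds of interacting decision trees and admits no comparably compact, faithful summary.

```python
# Illustrative sketch only: contrasts an interpretable model with an
# inscrutable one on synthetic data. Not drawn from the Essay.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic "case" data: 5,000 cases described by 50 input features.
X, y = make_classification(n_samples=5000, n_features=50,
                           n_informative=20, random_state=0)

# A logistic regression can be explained by inspection: each feature adds a
# fixed weight to the predicted log-odds, so its decision rule is legible.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("first five weights:", linear.coef_[0][:5])

# A gradient-boosted ensemble combines hundreds of trees, each splitting on
# interacting combinations of features; inspecting its parameters does not
# yield a humanly graspable account of why a particular case was classified
# as it was.
ensemble = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print("number of trees:", ensemble.n_estimators_)
print("prediction for one case:", ensemble.predict(X[:1])[0])
```

Nothing in the sketch turns on the particular libraries or models used; the contrast is only meant to illustrate the kind of complexity the sources cited above describe.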

This discourse has focused primarily on explanations provided to decision subjects. For example, the European Union’s General Data Protection Regulation (GDPR) arguably gives decision subjects a “right to explanation,” 9 The GDPR requires that data subjects be informed of “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Commission Regulation 2016/679, art. 13(2)(f), 2016 O.J. (L 119) 1.
It further provides a limited “right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Id. art. 22(1). For the debate about what the GDPR’s requirements entail, see, e.g., Bryan Casey, Ashkon Farhangi & Roland Vogl, Rethinking Explainable Machines: The GDPR’s “Right to Explanation” Debate and the Rise of Algorithmic Audits in Enterprise, 34 Berkeley Tech. L.J. 143, 153–68 (2019); Talia B. Gillis & Josh Simons, Explanation < Justification: GDPR and the Perils of Privacy, Pa. J.L. & Innovation (forthcoming 2019) (manuscript at 2–4), https://ssrn.com/abstract=3374668 (on file with the Columbia Law Review); Margot E. Kaminski, The Right to an Explanation, Explained, 34 Berkeley Tech. L.J. 189, 192–93 (2019); Andrew D. Selbst & Julia Powles, Meaningful Information and the Right to Explanation, 7 Int’l Data Privacy L. 233, 233–34 (2017); Michael Veale & Lilian Edwards, Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling, 34 Computer L. & Security Rev. 398, 398–99 (2018); Wachter et al., supra note 8, at 861–65; Andy Crabtree, Lachlan Urquhart & Jiahong Chen, Right to an Explanation Considered Harmful (Apr. 8, 2019) (unpublished manuscript), https://ssrn.com/abstract=3384790 (on file with the Columbia Law Review).
reflecting the common premise that “[t]o justify a decision-making procedure that involves or is constituted by a machine learning model, an individual subject to that decision-making procedure requires an explanation of how the machine learning model works.” 10 Gillis & Simons, supra note 9 (manuscript at 11) (emphasis added). Some scholars have criticized this focus, emphasizing the importance of public accountability. 11 For the most part, this emphasis is recent. See, e.g., Doshi-Velez & Kortz, supra note 3, at 3–9 (describing the explanation system’s role in public accountability); Hannah Bloch-Wehba, Access to Algorithms, 88 Fordham L. Rev. (forthcoming 2019) (manuscript at 4–9), https://ssrn.com/abstract=3355776 (on file with the Columbia Law Review) (“These features . . . have prompted calls for new mechanisms of transparency and accountability in the age of algorithms.”); Robert Brauneis & Ellen P. Goodman, Algorithmic Transparency for the Smart City, 20 Yale J.L. & Tech. 103, 132 (2018) (“Such accountability requires not perfect transparency . . . but . . . meaningful transparency.”); Gillis & Simons, supra note 9 (manuscript at 11–12) (“Explanations of machine learning models are certainly not sufficient for many of the most important forms of justification in modern democracies . . . .”); Selbst & Barocas, supra note 4, at 1087 (“[F]aced with a world increasingly dominated by automated decision-making, advocates, policymakers, and legal scholars would call for machines that can explain themselves.”); Jennifer Cobbe, Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making, Legal Stud. (July 9, 2019), https://www.cambridge.org/core/journals/legal-studies/article/administrative-law-and-the-machines-of-government-judicial-review-of-automated-publicsector-decisionmaking/09CD6B470DE4ADCE3EE8C94B33F46FCD/core-reader (on file with the Columbia Law Review) (“Legal standards and review mechanisms which are primarily concerned with decision-making processes, which examine how decisions were made, cannot easily be applied to opaque, algorithmically-produced decisions.”). But, for a truly pathbreaking consideration of these issues, see Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1258 (2008) (“This technological due process provides new mechanisms to replace the procedural regimes that automation endangers.”). Talia Gillis and Josh Simons, for example, contrast “[t]he focus on individual, technical explanation . . . driven by an uncritical bent towards transparency” with their argument that “[i]nstitutions should justify their choices about the design and integration of machine learning models not to individuals, but to empowered regulators or other forms of public oversight bodies.” 12 Gillis & Simons, supra note 9 (manuscript at 6–12); see also David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 708–09 (2017) (emphasizing the many choices involved in implementing a machine learning model and the different sorts of explanations that could be made). Taken together, these threads suggest the view of explanatory flows in decisionmaking illustrated in Figure 1, in which decisionmakers justify their choices by explaining case-by-case outcomes to decision subjects and separately explaining design choices regarding automated decision tools to the public and oversight bodies.

Figure 1: Schematic of Explanatory Flows in a Simple Decision System

Many real-world decision systems require significantly more complex explanatory flows, however, because decisionmaking responsibility is delegated and distributed across multiple actors to handle large numbers of cases. Delegated, distributed decision systems commonly include agenda setters, who determine the goals and purposes of the systems; rulemakers tasked with translating agenda setters’ goals into decision criteria; and adjudicators, who apply those criteria to particular cases. 13 The terms “adjudication” and “rulemaking” are borrowed, loosely, from administrative law. See 5 U.S.C. § 551 (2012); see also, e.g., id. §§ 553–557. The general paradigm in Figure 2 also describes many private decision systems. In democracies, the ultimate agenda setter for government decisionmaking is the public, often represented by legislatures and courts. The public also has a role in agenda setting for many private decision systems, such as those related to employment and credit. 14 See infra section III.B.2. Figure 2 illustrates the explanatory flows required by a delegated, distributed decision system.

Figure 2: Schematic of Explanatory Flows in a Delegated, Distributed Decision System
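
As a rough illustration of the structure Figure 2 depicts, and not anything proposed in the Essay, the actors and explanatory flows of a delegated, distributed decision system can be modeled as a small data structure; the role names and flow labels below are assumptions drawn loosely from the surrounding discussion.

```python
# Illustrative sketch only: a toy model of the actors and explanatory flows
# in a delegated, distributed decision system (cf. Figure 2). Role names and
# flow labels are assumptions, not the Essay's.
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    role: str  # "agenda setter", "rulemaker", "adjudicator", or "decision subject"

@dataclass
class ExplanatoryFlow:
    sender: Actor
    receiver: Actor
    content: str  # e.g., goals, decision criteria, case outcomes, justifications

@dataclass
class DecisionSystem:
    actors: list = field(default_factory=list)
    flows: list = field(default_factory=list)

    def add_flow(self, sender: Actor, receiver: Actor, content: str) -> None:
        self.flows.append(ExplanatoryFlow(sender, receiver, content))

# The public delegates agenda setting; rulemakers translate goals into
# decision criteria; adjudicators apply those criteria and explain outcomes
# to decision subjects; accountability requires flows back to the agenda setter.
public = Actor("the public", "agenda setter")
rulemaker = Actor("agency rulemakers", "rulemaker")
adjudicator = Actor("caseworker", "adjudicator")
subject = Actor("applicant", "decision subject")

system = DecisionSystem(actors=[public, rulemaker, adjudicator, subject])
system.add_flow(public, rulemaker, "goals and purposes of the system")
system.add_flow(rulemaker, adjudicator, "decision criteria and their rationale")
system.add_flow(adjudicator, subject, "explanation of the case-by-case outcome")
system.add_flow(rulemaker, public, "justification of decision-criteria design choices")

for flow in system.flows:
    print(f"{flow.sender.name} -> {flow.receiver.name}: {flow.content}")
```

The only point of the sketch is that each added role multiplies the explanatory flows that an inscrutable decision tool can disrupt.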

Delegation and distribution of decisionmaking authority, while often necessary and effective for dealing with agenda setters’ limited time and expertise, proliferate explanatory information flows. Delegation, whether from the public or a private agenda setter, creates the potential for principal–agent problems and hence the need for accountability mechanisms. 15 See Kathleen M. Eisenhardt, Agency Theory: An Assessment and Review, 14 Acad. Mgmt. Rev. 57, 61 (1989) (“The agency problem arises because (a) the principal and the agent have different goals and (b) the principal cannot determine if the agent has behaved appropriately.”); see also Gillis & Simons, supra note 9 (manuscript at 6–10) (arguing for a principal–agent framework of accountability in considering government use of machine learning). Explanation requirements, including a duty to inform principals of facts that “the principal would wish to have” or “are material to the agent’s duties,” are basic mechanisms for ensuring that agents are accountable to principals. 16 Restatement (Third) of Agency § 8.11 (Am. Law Inst. 2005). Distribution of responsibility multiplies these principal–agent concerns, while adding an underappreciated layer of explanatory flows necessary for coordination among decision-system actors. 17 See supra Figure 2.

Automated decision tools are particularly attractive to designers of delegated, distributed decision systems because their deployment promises to improve consistency, decrease bias, and lower costs. 18 See, e.g., Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 Geo. L.J. 1147, 1160 (2017) [hereinafter Coglianese & Lehr, Regulating by Robot] (“Despite this interpretive limitation, machine-learning algorithms have been implemented widely in private-sector settings. Companies desire the savings in costs and efficiency gleaned from these techniques . . . .”). For example, such tools are being used or considered for decisions involving pretrial detention, 19 See, e.g., Jessica M. Eaglin, Constructing Recidivism Risk, 67 Emory L.J. 59, 61 (2017). sentencing, 20 See, e.g., State v. Loomis, 881 N.W.2d 749, 753 (Wis. 2016). child welfare, 21 See, e.g., Dan Hurley, Can an Algorithm Tell When Kids Are in Danger?, N.Y. Times Mag. (Jan. 2, 2018), https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html (on file with the Columbia Law Review). credit, 22 See, e.g., Matthew Adam Bruckner, The Promise and Perils of Algorithmic Lenders’ Use of Big Data, 93 Chi.-Kent L. Rev. 3, 12–13 (2018). employment, 23 See, e.g., Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 860 (2017). and tax auditing. 24 See, e.g., Kimberly A. Houser & Debra Sanders, The Use of Big Data Analytics by the IRS: Efficient Solutions or the End of Privacy as We Know It?, 19 Vand. J. Ent. & Tech. L. 817, 819–20 (2017). Unfortunately, the inscrutability of many machine-learning-based decision tools creates barriers to all of the explanatory flows illustrated in Figure 2. 25 See infra section IV.B. Expanding the focus of the explainability debate to include public accountability is thus only one step toward a more realistic view of the ramifications of decision tool inscrutability. Before incorporating machine-learning-based decision tools into a delegated, distributed decision system, agenda setters should have a clear-eyed view of what information is feasibly available to all of the system’s actors. This would enable them to assess whether that information, combined with other mechanisms, can provide a sufficient level of accountability 26 See, e.g., Bloch-Wehba, supra note 11 (manuscript at 27–28) (discussing the challenge of determining adequate public disclosure of algorithm-based government decisionmaking); Brauneis & Goodman, supra note 11, at 166–67 (“Governments should consciously generate—or demand that their vendors generate—records that will further public understanding of algorithmic processes.”); Citron, supra note 11, at 1305–06 (arguing that mandatory audit trails “would ensure that agencies uniformly provide detailed notice to individuals”); Gillis & Simons, supra note 9 (manuscript at 2) (“Accountability is achieved when an institution must justify its choices about how it developed and implemented its decision-making procedure, including the use of statistical techniques or machine learning, to an individual or institution with meaningful powers of oversight and enforcement.”); Selbst & Barocas, supra note 4, at 1138 (“Where intuition fails, the task should be to find new ways to regulate machine learning so that it remains accountable.”). and coordination to justify the use of a particular automated decision tool in a particular context.

Incorporating inscrutable automated decision tools has ramifications for all stages of delegated, distributed decisionmaking. This Essay focuses on the implications for the creation of decision criteria, or rulemaking. 27 Elsewhere, I focus on the implications for adjudication. Katherine J. Strandburg, Adjudicating with Inscrutable Decision Rules, in Machine Learning and Society: Impact, Trust, Transparency (Marcello Pelillo & Teresa Scantamburlo eds., forthcoming 2020) (on file with the Columbia Law Review). As background for the analysis, Part I briefly compares automated, machine-learning-based decision tools to more familiar forms of decisionmaking criteria. Part II uses the explanation requirements embedded in administrative law as a springboard to analyze the functions that explanation has conventionally been expected to perform with regard to rulemaking. Part III considers how incorporating inscrutable machine-learning-based decision tools changes the potential effectiveness of explanations for these functions. Part IV concludes by suggesting approaches that may alleviate these problems in some contexts.