Artificial intelligence (AI) is increasingly used to make important decisions, from university admissions selections to loan determinations to the distribution of COVID-19 vaccines. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability.

In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age. The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries, establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory.

This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in-depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.

What appeals rights, if any, should people have when they are subjected to decision-making by artificial intelligence (AI)? 1 For purposes of discussion, this Article uses “AI” decision-making as a shorthand to refer to decision-making by algorithms more generally. Though computer scientists would not consider all of the algorithms used for decision-making today to qualify as artificial intelligence, decision-making algorithms are rapidly growing more sophisticated. See, e.g., Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305, 1307 (2019) (indicating the range of applications to which decision-making algorithms have been applied, including playing chess and driving vehicles). More practically, even relatively simple algorithms can be used to substitute, in whole or in part, for human decision-making—an extension, or replacement, of human intelligence. Id. at 1335 (“[Legal self-help systems] are simple expert systems—often in the form of chatbots—that provide ordinary users with answers to basic legal questions.”).
It is helpful, though, to consider the background behind the shorthand. An algorithm is a computer program. E.g., David Lehr & Paul Ohm, Playing With the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 660–61 (2017). There are many different kinds of algorithms of varying levels of autonomy and sophistication, some of which are collectively referred to as AI: These range from programs that automate fields of human expertise by mapping out what human experts know, to algorithms that scan vast amounts of data, to algorithms that, effectively, create their own rules. See, e.g., Surden, supra, at 1310 (dividing AI into (1) machine learning and (2) logical rules and knowledge representation). Algorithmic decision-making entails using a computer program to make a decision. This can mean taking the decision a computer program gives you as the end result or relying on such a decision as a significant element in human decision-making. See, e.g., Algorithmic Accountability Act, S. 1108, 116th Cong. § 2(1) (2019) (defining an automated decision system as “a computational process . . . , that makes a decision or facilitates human decision making, that impacts consumers”).
The EU’s recently proposed Artificial Intelligence Act (AIA) similarly defines “AI” expansively. The draft AIA defines an “AI system” as “software that is developed with one or more of the techniques and approaches listed in Annex I,” which include (a) machine-learning approaches, (b) logic- and knowledge-based approaches, and (c) statistical approaches, and that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, at tit. I art. 3(1), annex I, COM (2021) 206 final (Apr. 21, 2021).
The right to challenge decisions with significant effects is a core principle of the rule of law. 2 See, e.g., Henry J. Friendly, “Some Kind of Hearing”, 123 U. Pa. L. Rev. 1267 (1975) (“The Court has consistently held that some kind of hearing is required at some time before a person is finally deprived of his property interests.” (quoting Wolff v. McDonnell, 418 U.S. 539, 557–58 (1974))). Yet it is unclear how this principle will fare for significant decisions made or facilitated by AI.

As data collection and storage have become cheaper, processing has become faster, and algorithms have become more complex and more effective at certain tasks, the use of AI in decision-making has increased. The government and private sector now use algorithms to decide how to distribute welfare benefits, 3 See, e.g., Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1252 (2008) [hereinafter Citron, Technological Due Process]; see also Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797, 800–01 (2021); David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey & Mariano-Florentino Cuéllar, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies 17 (2020), /uploads/2020/02/ACUS-AI-Report.pdf (showcasing that AI is used, among other things, in social welfare policy).
whether to hire or fire a person, 4 See Ifeoma Ajunwa, An Auditing Imperative for Automated Hiring Systems, 34 Harv. J.L. & Tech. 621, 631–33 (2021) [hereinafter Ajunwa, Auditing Imperative]; Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. 1671, 1694 (2020) [hereinafter Ajunwa, Paradox of Automation] (citing the example of Goldman Sachs building an algorithmic model to automate all management, including hiring and firing); Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 860 (2017). whether expressive material should be removed from online platforms, 5 See Rory Van Loo, Federal Rules of Platform Procedure, 88 U. Chi. L. Rev. 829, 836–37 (2021) (discussing the processes used by technology platforms to resolve disputes); infra notes 312–313 and accompanying text. whether to keep people in prison, 6 See, e.g., Jessica M. Eaglin, Constructing Recidivism Risk, 67 Emory L.J. 59, 61 (2017); Deirdre K. Mulligan & Kenneth A. Bamberger, Procurement as Policy: Administrative Process for Machine Learning, 34 Berkeley Tech. L.J. 773, 776 (2019); Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. 1343, 1348 (2018). and more. 7 See, e.g., Mulligan & Bamberger, supra note 6, at 784–85 (collecting examples of government use of algorithmic decision-making, including determining veterans’ disability compensation; evaluating teachers and determining their compensation; identifying children at risk of abuse or neglect; and allocating public services).

The increasing use of AI to aid or substitute for human decision-making raises the question of what, if any, process should be afforded those affected by these decisions. 8 See, e.g., Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 27 (2014) [hereinafter Citron & Pasquale, Scored Society]; Citron, Technological Due Process, supra note 3, at 1281; Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93, 109 (2014); Aziz Z. Huq, A Right to a Human Decision, 106 Va. L. Rev. 611, 651 (2020) [hereinafter Huq, A Right to a Human Decision]; Aziz Z. Huq, Constitutional Rights in the Machine-Learning State, 105 Cornell L. Rev. 1875, 1905 (2020) [hereinafter Huq, Constitutional Rights in the Machine-Learning State]. Machine decision-making can be technically inscrutable and thus difficult to contest; it is likely to become even less scrutable as black-box machine-learning techniques expand. 9 See, e.g., Mike Ananny & Kate Crawford, Seeing Without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability, 20 New Media & Soc’y 973, 981–82 (2016); Jenna Burrell, How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms, Big Data & Soc’y Jan.–June 2016, at 3 (“At the heart of this challenge is an opacity that relates to the specific techniques used in machine learning.”); Tal Zarsky, The Trouble With Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making, 41 Sci. Tech. & Hum. Values 118, 123–27 (2016) [hereinafter Zarsky, The Trouble With Algorithmic Decisions]. See generally Emre Bayamlıoğlu, Transparency of Automated Decisions in the GDPR: An Attempt for Systemisation 17 (Jan. 16, 2018) (unpublished manuscript) (on file with the Columbia Law Review) [hereinafter Bayamlıoğlu, Transparency of Automated Decisions] (“Automated data-driven systems are distinguished by their complex, increasingly autonomous, and adaptive properties which render their technical dimension and inner workings obscure to human cognition.”); Tal Z. Zarsky, Transparent Predictions, 2013 U. Ill. L. Rev. 1503 [hereinafter Zarsky, Transparent Predictions] (describing the importance of transparency and advancing a framework for understanding the role it must play in AI regulation). Humans may exhibit an “automation bias” that creates overconfidence in machine decisions, 10 Lee A. Bygrave, Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling, 17 Comput. L. & Sec. Rep. 17, 18 (2001) [hereinafter Bygrave, Minding the Machine] (describing humans’ “automatic acceptance of the validity of the decisions reached”); Citron, Technological Due Process, supra note 3, at 1271–72; Isak Mendoza & Lee A. Bygrave, The Right Not to Be Subject to Automated Decisions Based on Profiling, in EU Internet Law 77, 83 (Tatiani-Eleni Synodinou, Philippe Jougleux, Christiana Markou & Thalia Prastitou eds., 2017) (“The Commission . . . expressed a fear that such processes will cause humans to take for granted the validity of the decisions reached and thereby reduce their own responsibilities to investigate and determine the matters involved.”). and an ensuing bias against challenges to those decisions. 11 See, e.g., Citron, Technological Due Process, supra note 3, at 1271–72. It is unclear how challenges, especially if they come with meaningful process rights, will affect the cost efficiencies that automated decision-making promises to deliver. And if related due process protections such as transparency and notice are implemented badly or not at all, meaningful challenges will not be possible.

In the United States, regulatory proposals directed at algorithmic decision-making have largely ignored calls for individual due process in favor of system-wide regulation aimed at risk mitigation. To the extent there has been convergence among recent U.S. policy proposals, it has been on the need for systemic policy solutions, such as algorithmic impact assessments or auditing, rather than an individual right to contest. 12 See, e.g., Algorithmic Accountability Act of 2019, S. 1108, 116th Cong. (2019); H.R. 1655, 66th Leg., 2019 Reg. Sess. (Wash. 2019) (explaining that the act, “[r]elating to establishing guidelines for government procurement and use of automated decision systems,” attempts to establish “algorithmic accountability report[s]”); Margot E. Kaminski, Binary Governance: Lessons From the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529, 1582–1607 (2019) [hereinafter Kaminski, Binary Governance] (contrasting and analyzing the GDPR’s interplay between individual rights and collaborative governance). But see California’s newly enacted amendment to the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), which includes a provision on individual rights requiring the California Privacy Protection Agency to issue
regulations governing access and opt-out rights with respect to businesses’ use of automated decisionmaking technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in those decisionmaking processes, as well as a description of the likely outcome of the process with respect to the consumer.
Cal. Civ. Code § 1798.185(16) (2020).

In Europe, by contrast, regulators are taking a more holistic approach to algorithmic decision-making. The European Union’s (EU) General Data Protection Regulation (GDPR), which went into effect in May 2018, establishes a complex set of regulations of algorithmic decision-making that span multiple contexts and sectors. 13 See Commission Regulation 2016/679, 2016 O.J. (L 119/1) 1 (EU) [hereinafter GDPR]; Kaminski, Binary Governance, supra note 12, at 1538–40 (comparing human and algorithmic decision-making). The GDPR incorporates both systemic governance measures and various individual rights for data subjects: transparency, notice, access, a right to object to processing, and, for those subject to automated decision-making, the right to contest certain decisions. 14 See GDPR, supra note 13, art. 22(3).

Likewise, the Council of Europe has articulated a right to contest in its amended data protection convention, known as Convention 108. 15 Council of Eur., Convention 108+: Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data 15 (2018), convention-108-convention-for-the-protection-of-individuals-with-regar/16808b36f1 [hereinafter Convention 108].
The Council of Europe is an international human rights organization that consists of all the EU Member States plus additional non-EU members. 16 Id. at 34. As of now, forty countries have signed on to the amended Convention. 17 Modernisation of the Data Protection “Convention 108”, Council of Eur., [https://perma.cc/C5FR-HK7E] (last visited July 31, 2021).
Twelve have ratified it. 18 Italy, a 12th Ratification for Convention 108+, Council of Eur. (July 8, 2021). The amended Convention states that “[e]very individual shall have a right[] . . . not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration.” 19 Convention 108, supra note 15, art. 9(1)(a) (emphasis added). In 2020, the Council of Europe adopted recommendations on AI, explaining that individuals should be provided “effective means to contest relevant determinations and decisions.” 20 Council of Eur., Recommendation CM/Rec(2020)1 of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems 9, 13 (2020) [hereinafter Council of Eur., Recommendation on the Human Rights Impacts of Algorithmic Systems] (emphasis added).

The right to contest AI is gaining traction outside of Europe, too. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental economic organization focused on stimulating world trade, includes a right to contest in its recommendations on AI. 21 OECD, Recommendation of the Council on Artificial Intelligence, § 1.3.iv, OECD Legal Instruments (May 5, 2019), OECD-LEGAL-0449 (on file with the Columbia Law Review).
The OECD’s recommendations have historically formed the basis of data protection laws around the world, and its recommendations on AI are likely to be similarly influential. Brazil’s comprehensive data protection law, enacted in 2018, includes “the right to request a review of decisions taken” by AI. 22 See General Personal Data Protection Act (LGPD), Law No. 13,709, art. 20, 2018 (Braz.). In November 2020, the Office of the Privacy Commissioner of Canada recommended that Canadian data privacy law be revised to include a right to contest AI decisions. 23 A Regulatory Framework for AI: Recommendations for PIPEDA Reform, Off. of the Priv. Comm’r of Can. (Nov. 2020), consultations/completed-consultations/consultation-ai/reg-fw_202011/ [https://perma.cc/E6ZL-AP7H]; see also Ignacio Cofone, Policy Proposals for PIPEDA Reform to Address Artificial Intelligence Report, Off. of the Priv. Comm’r of Can. (Nov. 2020), https://www.priv./pol-ai_202011/#fn190-rf (last updated Nov. 12, 2020).
The proposed amendments to Quebec’s privacy law, Bill 64, include a limited right to contest. 24 An Act to Modernize Legislative Provisions as Regards the Protection of Personal Information, National Assembly of Québec, Bill 64, 102.12.1, 102.12.1(3) (2020) (Can.). In addition to the right to correct erroneous information used to arrive at the decision, “[t]he person concerned must be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.” Id.

Despite this, few have given attention to the right to contest AI. Although the GDPR’s notice and transparency requirements for AI, especially the so-called “right to explanation,” have attracted a flurry of scholarly analysis, 25 See, e.g., Margot E. Kaminski, The Right to Explanation, Explained, 34 Berkeley Tech. L.J. 189, 192 n.8 (2019) [hereinafter Kaminski, Right to Explanation, Explained] (citing literature). contestation has not garnered as much attention. 26 A minority of European scholars have discussed contestation. See Mireille Hildebrandt, Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning, 20 Theoretical Inquiries L. 83, 119–20 (2019) [hereinafter Hildebrandt, Privacy as Protection of the Incomputable Self] (“This should result in testable and contestable decision-systems whose human overlords can be called to account, squarely facing the legal interpretability problem and its relationship with the computer science interpretability problem.”); Mireille Hildebrandt, The Dawn of a Critical Transparency Right for the Profiling Era, in Digital Enlightenment Yearbook 2012, at 41, 49–54 (Jacques Bus, Malcolm Crompton, Mireille Hildebrandt & George Metakides eds., 2012); Mendoza & Bygrave, supra note 10, at 93–94 (“[A] right of contest is not simply a matter of being able to say ‘stop’, but is akin to a right of appeal . . . . [T]o be meaningful, it must set . . . an obligation to hear and consider the merits of the appeal . . . . [I]t must additionally . . . provide . . . reasons for the decision.”). And although the right to contest is clearly established in the GDPR, regulators have yet to give meaningful guidance on what the right is or how it should be implemented.

This Article takes on the right to contest, both descriptively and normatively. It seeks to fill the gap in commentary and bridge the U.S. and EU conversations. This Article is the first to examine at length this right and its content, and the first to provide an in-depth analysis of the GDPR right to contestation for a U.S. audience. 27 Related work that touches on the right to contest includes Emre Bayamlıoğlu, Contesting Automated Decisions: A View of Transparency Implications, 4 Eur. Data Prot. L. Rev. 433, 433–35 (2018) [hereinafter Bayamlıoğlu, Contesting Automated Decisions] (discussing transparency requirements for effective contestation of automated decisions from a European perspective); Bayamlıoğlu, Transparency of Automated Decisions, supra note 9, at 3–4, 17 (proposing a transparency framework from a European perspective); Huq, A Right to a Human Decision, supra note 8, at 621–22 (interpreting Article 22 as establishing a “right to a human decision,” and rejecting such a right). In an article arguing for counterfactuals as a method of providing the “explanation” required by Recital 71 of the GDPR, Sandra Wachter, Brent Mittelstadt, and Chris Russell assert that these explanatory counterfactuals could support contestation rights. Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J.L. & Tech. 841, 872–78 (2018) [hereinafter Wachter et al., Counterfactual Explanations Without Opening the Black Box].
Deirdre K. Mulligan and coauthors have developed the related concept of “contestable design”: system design that encourages and allows iterative human engagement in a system’s evolution and deployment. Contestable design operates differently from ex post contestation, but systems designed for contestability could support contestation in practice. See, e.g., Daniel Kluttz, Nitin Kohli & Deirdre K. Mulligan, Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions, in After the Digital Tornado: Networks, Algorithms, Humanity 137, 139 (Kevin Werbach ed., 2020); Daniel N. Kluttz & Deirdre K. Mulligan, Automated Decision Support Technologies and the Legal Profession, 34 Berkeley Tech. L.J. 861 (2019); Mulligan & Bamberger, supra note 6, at 791, 850–57.

This Article investigates a central question about regulating algorithmic decision-making: Should there be a right to contest AI decisions? In investigating this question, this Article uncovers and fills a substantial gap in the literature: the lack of a theoretical scaffolding for discussing contestation models for privatized process at speed and at scale. This Article probes this question theoretically, considering reasons frequently given for establishing individual due process rights, and comparatively, through in-depth case studies of existing contestation systems. Ultimately, we find merit in the possibility of establishing a right to contest AI, including where decisions are made by private actors, to further due process values. We consider how to design an effective right.

Part I introduces some of the challenges algorithmic decision-making presents and how they might relate to contestation. Part II turns to whether a right to contest AI decision-making can find theoretical purchase. Part III looks to models for contestation, establishing four contestation archetypes and examining them in action through comparative case studies. It considers the GDPR’s right to contest and Member State implementations of it, the Digital Millennium Copyright Act’s “notice-and-takedown” scheme for online copyright infringement, the Fair Credit Billing Act’s contestation scheme for credit card charges, and the EU’s so-called “right to be forgotten.” Part IV integrates the findings from these investigations and develops normative and practical guidance for designing a right to contest AI.