Many legal scholars have explored how courts can apply legal doctrines, such as procedural due process and equal protection, directly to government actors when those actors deploy artificial intelligence (AI) systems. But very little attention has been given to how courts should hold private vendors of these technologies accountable when the government uses their AI tools in ways that violate the law. This is a concerning gap, given that governments are turning to third-party vendors with increasing frequency to provide the algorithmic architectures for public services, including welfare benefits and criminal risk assessments. Indeed, when challenged, many state governments have disclaimed any knowledge of, or ability to understand, explain, or remedy, problems created by AI systems that they have procured from third parties. The general position has been “we cannot be responsible for something we don’t understand.” This means that algorithmic systems are contributing to government decisionmaking without any mechanisms of accountability or liability. They fall within an accountability gap.
In response, we argue that courts should adopt a version of the state action doctrine to apply to vendors who supply AI systems for government decisionmaking. Analyzing the state action doctrine’s public function, compulsion, and joint participation tests, we argue that—much like other private actors who perform traditional core government functions at the behest of the state—developers of AI systems that directly influence government decisions should be found to be state actors for purposes of constitutional liability. This is a necessary step, we suggest, to bridge the current AI accountability gap.



Advocates and experts are increasingly concerned about the rapid introduction of artificial intelligence (AI) systems in government services, from facial recognition and autonomous weapons to criminal risk assessments and public benefits administration. 1 See Litigating Algorithms, AI Now Inst. (Sept. 24, 2018) [hereinafter Litigating Algorithms Announcement]; infra section I.A. The term “artificial intelligence” has taken on many meanings, especially in conversations about law and policy. For this Essay, we will use it as a broad umbrella term, covering any computational system that utilizes machine learning, including deep learning and reinforcement learning; neural networks and algorithmic decisionmaking; and other similar techniques to generate predictions, classifications, or determinations about individuals or groups. We choose this definition in part because, while some of the systems we discuss may not actively incorporate the most modern AI techniques, they are designed with the same objectives in mind and aim to usher in AI capabilities as soon as they are feasible or available. Every month, more algorithmic and predictive technologies are being applied in domains such as healthcare, education, criminal justice, and beyond. 2 See infra section I.A. A range of “advocates, academics, and policymakers have raised serious concerns over the use of such systems, which are often deployed without adequate assessment, safeguards, [or] oversight.” 3 Litigating Algorithms Announcement, supra note 1. This is due, in part, to the fact that government agencies commonly outsource the development—and sometimes the implementation—of these systems to third-party vendors. 4 See, e.g., AI Now Inst., Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems 7 (2018) [hereinafter Litigating Algorithms] (“Government agencies adopting these systems commonly enter into contracts with third-party vendors that handle everything.”). 
This outsourcing often leaves public officials and employees without any real understanding of those systems’ inner workings or, more importantly, the variety of risks they might pose. Such risks range from discrimination and disparate treatment to lack of due process, discontinuance of essential services, and harmful misrepresentations. 5 For a survey of these risks and concerns, see generally Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016) (using the lens of antidiscrimination law to explore bias arising from data mining); Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014) (warning that additional procedural safeguards are necessary for automated prediction systems); Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249 (2008) (proposing a “technological due process” model to vindicate procedural values in an era of automation); Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93 (2014) (arguing that procedural due process provides a framework for the regulation of big data); David Gray & Danielle Citron, The Right to Quantitative Privacy, 98 Minn. L. Rev. 62 (2013) (raising concerns over the use of algorithmic systems to establish probable cause for law enforcement searches or arrests).

These risks are neither hypothetical nor intangible. Today, AI systems help governments decide everything from whom to release on bail, 6 See Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias, ProPublica (May 23, 2016). to how many hours of care disabled individuals will receive, 7 See Colin Lecher, What Happens When an Algorithm Cuts Your Health Care, The Verge (Mar. 21, 2018); see also infra section I.A. to which employees should be hired, fired, or promoted. 8 See Miranda Bogen & Aaron Rieke, Upturn, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias 1–2 (2018); Loren Larsen, HireVue Poised to Bring US Government Agencies’ Recruiting Up to Speed, HireVue (May 16, 2019). Yet as decisionmaking shifts from human-only to a mixture of human and algorithm, questions of how to allocate constitutional liability have remained largely unanswered.

The majority of solutions to these concerns have focused on technological or regulatory oversight to address bias, fairness, and due process. 9 See supra note 5. However, to date, few if any of these approaches have succeeded in providing adequate accountability frameworks, either because they have failed to address the larger social and structural aspects of the problems or because there is a lack of political will to implement them. 10 See, e.g., Bogen & Rieke, supra note 8, at 7 (“Structural kinds of bias also act as barriers to opportunity for jobseekers, especially when predictive tools are involved.”); Dillon Reisman, Jason Schultz, Kate Crawford & Meredith Whittaker, AI Now Inst., Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability 3 (2018) [hereinafter AI Now AIA Report] (proposing a comprehensive framework for assessing the “automated decision systems” of public agencies); Meredith Whittaker, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz & Oscar Schwartz, AI Now Inst., AI Now 2018 Report 12 (2018) [hereinafter AI Now 2018 Report] (“[AI] tools . . . could easily be turned to more surveillant ends in the U.S., without public disclosure and oversight, depending on market incentives and political will.”).
As such, it is time to consider new paradigms for accountability, especially for potential constitutional violations.

One underexplored approach is the possibility of holding AI vendors accountable for constitutional violations under the state action doctrine. Although state actors are typically governmental employees, a private party may be deemed a state actor if (1) the private party performs a function that is traditionally and exclusively performed by the state, (2) the state directs or compels the private party’s conduct, or (3) the private party acts jointly with the government. 11 Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019); Sybalski v. Indep. Grp. Home Living Program, Inc., 546 F.3d 255, 257 (2d Cir. 2008).

This Essay explores this approach to AI accountability in three parts. Part I outlines the current state of play for government use of AI systems, especially those involved in key governmental decisionmaking processes. Part II reviews the relevant case law and literature on the state action doctrine, focusing on the public function, compulsion, and joint participation theories, and how these theories might apply to vendors of AI systems that governments use. Finally, Part III discusses the normative arguments in favor of applying the state action doctrine to close the AI accountability gap. Specifically, this Essay argues that—unlike traditional technology vendors that supply government actors with primarily functional tools, such as a computer operating system, word processing program, or web browser—AI vendors provide government with tools that assist or supply the core logic, justification, or action that is the source of the constitutional harm. Thus, much like other private parties whose conduct is fairly attributable to the state, vendors who build AI systems may also subject themselves to constitutional liability.