Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement the work of generalist judges. Such regulation will not only provide guidance to industry, helping it avoid preventable accidents; it will also assist a judiciary that is increasingly called upon to develop common law in response to legal disputes arising out of the deployment of AI.
Introduction
Corporations will increasingly attempt to substitute artificial intelligence (AI) and robotics for human labor.
1
See, e.g., James Manyika, Susan Lund, Michael Chui, Jacques Bughin, Jonathan Woetzel, Parul Batra, Ryan Ko & Saurabh Sanghvi, McKinsey & Co., Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages 1 (2017), https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx [https://perma.cc/XCF4-JJPC] (describing the far-reaching impact that automation will have on the global workforce).
This evolution will create novel situations for tort law to address. However, tort will be only one of several bodies of law at play in the deployment of AI. Regulators will try to forestall problems by developing licensing regimes and product standards. Corporate lawyers will attempt to deflect liability via contractual arrangements.
2
This is already a common practice in the digital economy. See, e.g., Timothy J. Calloway, Cloud Computing, Clickwrap Agreements, and Limitation on Liability Clauses: A Perfect Storm?, 11 Duke L. & Tech. Rev. 163, 173 (2012) (describing a proliferation of limitation of liability clauses); Aaron T. Chiu, Note, Irrationally Bound: Terms of Use Licenses and the Breakdown of Consumer Rationality in the Market for Social Network Sites, 21 S. Cal. Interdisc. L.J. 167, 195 (2011) (describing the use of “disclaimers of liability” in social media network use agreements). For a practical example of how contracts are used to deflect, allocate, or redirect liability in the construction industry, see generally Patricia D. Galloway, The Art of Allocating Risk in an EPC Contract to Minimize Disputes, Construction Law., Fall 2018, at 26 (discussing risk allocation in engineering, procurement, and construction (EPC) contracts). In the health care context, “hold harmless” clauses can deflect liability from software providers. See Ross Koppel, Uses of the Legal System that Attenuate Patient Safety, 68 DePaul L. Rev. 273, 275–76 (“The ‘hold harmless’ clause in EHR [Electronic Health Record] contracts functions to prevent vendors from being held responsible for errors in their software even if the vendor has been repeatedly informed of the problem and even if the problem causes harm or death to patients.”).
The interplay of tort, contract, and regulation will not just allocate responsibility ex post, spreading the costs of accidents among those developing and deploying AI, their insurers, and those they harm. This matrix of legal rules will also deeply influence the development of AI, including the industrial organization of firms and capital’s and labor’s relative shares of productivity and knowledge gains.
Despite these ongoing efforts to anticipate the risks of innovation, there is grave danger that AI will become one more tool for deflecting liability, like the shell companies that now obscure and absorb the blame for much commercial malfeasance.
3
As leading AI ethics expert Joanna Bryson has explained:
“Many of the problems we have in the world today come from people trying to evade the accountability of democracies and regulatory bodies. And AI would be the ultimate shell company. If AI is human-like, the argument goes, then you can use human justice on it. But that’s just false. You can’t even use human justice against shell companies. And there’s no way to build AI that can actually care about avoiding corruption or obeying the law. So it would be a complete mistake—a huge legal, moral and political hazard—to grant rights to AI.”
Fraser Myers, AI: Inhuman After All?, Spiked-Online (June 14, 2019), https://www.spiked-online.com/2019/06/14/ai-inhuman-after-all/ [https://perma.cc/A26G-YEX4] (conducting an interview with Bryson).
The perfect technology of irresponsible profit would be a robot capable of earning funds for a firm, while taking on the regulatory, compliance, and legal burden traditionally shouldered by the firm itself. Any proposal to grant AI “personhood” should be considered in this light.
4
See Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant, Of, for, and by the People: The Legal Lacuna of Synthetic Persons, 25 Artificial Intelligence & L. 273, 273 (2017) (“We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We conclude that difficulties in holding ‘electronic persons’ accountable when they violate the rights of others outweigh the . . . moral interests that AI legal personhood might protect.”).
Moreover, both judges and regulators should begin to draw red lines of responsibility and attribution now, while the technology is still nascent.
5
Some may argue it is already too late, thanks to the power of leading firms in the AI space. However, there have been many recent efforts to understand and curb the worst effects of such firms. The U.S. government has demonstrated an interest in keeping large tech companies in line. For example, Facebook is currently facing a $5 billion fine from the FTC, a $100 million fine from the SEC, and an FTC antitrust investigation. Ian Sherr, Facebook’s $5 Billion FTC Fine Is Just the Start of Its Problems, CNET (July 25, 2019), https://www.cnet.com/news/facebooks-5-billion-ftc-fine-is-just-the-start-of-its-problems/ (on file with the Columbia Law Review). The Department of Justice is also reviewing tech companies for antitrust issues. Brent Kendall, Justice Department to Open Broad, New Antitrust Review of Big Tech Companies, Wall St. J. (July 23, 2019), https://www.wsj.com/articles/justice-department-to-open-broad-new-antitrust-review-of-big-tech-companies-11563914235 (on file with the Columbia Law Review). In response, tech companies, such as Facebook and Google, have expanded their lobbying capacity. See Cecilia Kang & Kenneth P. Vogel, Tech Giants Amass a Lobbying Army for an Epic Washington Battle, N.Y. Times (June 5, 2019), https://www.nytimes.com/2019/06/05/us/politics/amazon-apple-facebook-google-lobbying.html (on file with the Columbia Law Review).
It may seem difficult to draw such red lines because both journalists and technologists often portray AI as a technological development that exceeds the control or understanding of those developing it.
6
See, e.g., Will Knight, The Dark Secret at the Heart of AI, MIT Tech. Rev. (Apr. 11, 2017), https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [https://perma.cc/V3LF-KBLD] (describing Nvidia’s experimental autonomous car as having a “mysterious mind” unable to be understood by those designing it); David Weinberger, Our Machines Now Have Knowledge We’ll Never Understand, WIRED (Apr. 18, 2017), https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/ [https://perma.cc/FW94-L2BE] (“This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition.”).
However, the suite of statistical methods at the core of technologies now hailed as AI has undergone evolution, not revolution.
7
See, e.g., Best Practice AI, Evolution, Not Revolution: What the Bestpractice.ai Library Tells Us About the State of AI (Part 1), Medium (Sept. 17, 2018), https://medium.com/@bestpracticeAI/evolution-not-revolution-what-the-bestpractice-ai-library-tells-us-about-the-state-of-ai-part-1-f488b29add0b [https://perma.cc/VB86-544K] (describing findings from the development of Bestpractice.ai, a library of AI use cases and case studies).
Large new sources of data have enhanced its scope of application, as well as technologists’ ambitions.
8
See generally Yoav Shoham, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, Terah Lyons, John Etchemendy, Barbara Grosz & Zoe Bauer, Artificial Intelligence Index: 2018 Annual Report (2018), http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf [https://perma.cc/3PWE-B7Z8] (presenting data suggesting that the number of patents and academic papers involving AI, among other metrics, has grown rapidly).
But the same types of doctrines applied to computational sensing, prediction, and actuation in the past can also inform the near future of AI development.
9
Notable recent U.S. work in this vein includes Bryan Casey, Robot Ipsa Loquitur, Geo. L.J. (forthcoming 2019) (manuscript at 8–11), https://ssrn.com/abstract=3327673 (on file with the Columbia Law Review) (arguing that extant forms of liability should apply to robotics, and thus to many of the forms of AI that constitute the information processing of such robotics, and that such liability can address many of the problems posed by the technology).
A company deploying AI can fail in many of the same ways as a firm using older, less avant-garde machines or software. This Essay focuses on one particular type of failing that can lead to harm: the use of inaccurate or inappropriate data in training sets for machine learning. Firms using faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such faulty data collection, analysis, and use is repeated or willful. Skeptics may worry that judges and juries are ill-equipped to make determinations about appropriate data collection, analysis, and use. However, they need not act alone—regulation of data collection, analysis, and use already exists in other contexts.
10
See infra Part II.
Such regulation not only provides guidance to industry to help it avoid preventable accidents and other torts. It also assists judges assessing standards of care for the deployment of emerging technologies. The interplay of federal regulation of health data with state tort suits for breach of confidentiality is instructive here: Egregious failures by firms can not only trigger tort liability but also catalyze regulatory commitments to prevent the problems that gave rise to that liability, which in turn should promote progress toward higher standards of care.
11
See infra Part II.
Preserving the complementarity of tort law and regulation in this way (rather than opting to radically diminish the role of either of these modalities of social order, as premature preemption or deregulation might do) is wise for several reasons. First, this hybrid model expands opportunities for those harmed by new technologies to demand accountability.
12
See Mary L. Lyndon, Tort Law and Technology, 12 Yale J. on Reg. 137, 143 (1995) (“The liability system supplements regulation.”).
Second, the political economy of automation will only fairly distribute expertise and power if law and policy create ongoing incentives for individuals to both understand and control the AI supply chain and AI’s implementation. Judges, lawmakers, and advocates must avoid developing legal and regulatory systems that merely deflect responsibility, rather than cultivate it, lest large firms exploit well-established power imbalances to burden consumers and workers with predictable harms arising out of faulty data.