* Professor of Law, Fordham Law School.
Tim Wu’s essay, Will Artificial Intelligence Eat the Law?, posits that automated decisionmaking systems may be taking the place of human adjudication in social media content moderation. Conventional adjudicative processes, he explains, are too slow or clumsy to keep up with the speed and scale of online information flows. Their eclipse, he concludes, is imminent, inevitable, and perhaps just as well. 1 Tim Wu, Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems, 119 Colum. L. Rev. 2001, 2001–02 (2019) [hereinafter Wu, AI Eat the Law].
Wu’s essay does not indulge in romantic tropes about cyborg robot overlords, nor does he embrace the conceit that networked technologies are inherently superior. He does not promise, for example, anything like Mark Zuckerberg’s prophecy to Congress in spring 2018 that artificial intelligence would soon cure Facebook of its failings in content moderation. 2 Sarah Jeong, AI Is an Excuse for Facebook to Keep Messing Up, The Verge (Apr. 13, 2018), https://www.theverge.com/2018/4/13/17235042/facebook-mark-zuckerberg-ai-artificial-intelligence-excuse-congress-hearings [https://perma.cc/6JB6-NJSG]. To the contrary, Wu is sober about the private administration of consumer information markets. After all, he has been among the most articulate proponents of positive government regulation in this area for almost two decades. The best we can do, Wu argues, is create hybrid approaches that carefully integrate artificial intelligence into the content moderation process. 3 Wu, AI Eat the Law, supra note 1, at 2001–05.
But in at least two important ways, Wu’s essay masks significant challenges. First, by presuming the inevitability of automated decisionmaking systems in online companies’ distribution of user-generated content and data, Wu obscures the indispensable role that human managers at the Big Tech companies play in developing and selecting their business designs, algorithms, and operational techniques for managing content distribution. 4 By Big Tech companies, I refer to the dozen or so internet companies that dominate the networked information economy, but especially the “big five”: Facebook, Alphabet (the owner of Google), Microsoft, Amazon, and Apple. For the purposes of this Response, I also include under this coinage Twitter, the social media company with the second-largest U.S. user base, behind only the Facebook-owned entities. See J. Clement, Most Popular Social Networks in the United States in October 2018, Based on Active Monthly Users (in Millions), Statista, https://www.statista.com/statistics/247597/global-traffic-to-leading-us-social-networking-sites/ [https://perma.cc/3Z7L-R5FX] (last visited Sept. 26, 2019). The companies deploy these resources to further their bottom-line interests in enlarging user engagement and dominating markets. 5 See Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media 33–34 (2019) (discussing how “moderation and screening are crucial steps that protect [internet companies’] corporate or platform brand . . . and contribute positively to maintaining an audience of users willing to upload and view content on their sites”). In this way, social media content moderation is only a tool for achieving the companies’ central objectives. Wu’s essay also says close to nothing about the various resources at work “behind the screens” that support this commercial mission. 6 See id. at 34–35 (noting that while some content moderation is well suited for “machine-automated filtering,” the majority of such work requires human screeners who are “called upon to employ an array of high-level cognitive functions and cultural competencies to make decisions about their appropriateness for a site or platform”). While Wu recognizes, for example, that tens of thousands of human reviewers exist, he downplays the companies’ role as managers of massive transnational production lines and employers of global labor forces. These workers and the proprietary infrastructure with which they engage are invaluable to the distribution of user-generated content and data.
Second, the claim that artificial intelligence is eclipsing law is premature, if not a little misleading. There is nothing inevitable about the private governance of online information flows when we do not yet know what law can do in this area. This is because courts have abjured their constitutional authority to impose legal duties on online intermediaries’ administration of third-party content. The prevailing judicial doctrine under section 230 of the Communications Act (as amended by the Communications Decency Act) 7 47 U.S.C. § 230 (2012). (section 230) allows courts to adjudicate the question of intermediary liability for user-generated content only when the service at issue “contributes materially” to the alleged illegality of that content. 8 Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1168 (9th Cir. 2008); accord Jones v. Dirty World Entm’t Recordings LLC, 755 F.3d 398, 410 (6th Cir. 2014) (describing Roommates.com as the “leading case” and applying the “contributes materially” standard); FTC v. Accusearch Inc., 570 F.3d 1187, 1200 (10th Cir. 2009) (applying the “contributes materially” standard). The common law, that is, has not had a meaningful hand in shaping intermediaries’ moderation of user-generated content because courts, citing section 230, have forsworn the law’s application. Defamation, fraud, and consumer protection law, for example, generally hold parties legally responsible for disseminating unlawful information that originates with third parties. But under the prevailing section 230 doctrine, powerful companies like Facebook, Google, and Amazon have no legal obligation to block or remove user-generated content that they have no hand in “creat[ing]” or “develop[ing].” 9 47 U.S.C. § 230(f)(3). That standard requires a substantial degree of involvement by an online company before liability attaches. It is therefore not quite right to say, as Wu does, that we are witnessing the retreat of judicial decisionmaking in this setting. Since Congress enacted section 230 over twenty years ago, there has never been a chance to see what even modest, run-of-the-mill judicial adjudication of content moderation decisions would look like.
The view of online content moderation that Wu advances here is pristine. Its exclusive focus on the ideal Platonic form of speech moderation resonates with the view that the internet can be an open and free forum for civic republican deliberation. 10 Compare Robert D. Putnam, Bowling Alone: The Collapse and Revival of American Community 245–46 (2000) (noting that “the rise of electronic communications and entertainment,” especially television, coincided with “the national decline in social connectedness” and civic disengagement among younger generations, although the evidence was not conclusive on causality), with Cass Sunstein, Republic.com 8–9, 167–70 (2001) (suggesting that while technology has given more power to consumers “to filter what they see,” a “widely publicized deliberative domain[] on the Internet, ensuring opportunities for discussion among people with diverse views,” would aid in maintaining a “well-functioning system of free expression”). This approach also recalls far more theoretical treatments of “discourse ethics.” See generally Jürgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (1996) (developing a discourse theory of law under which legitimate norms arise from communicative action, where parties debate validity claims rather than bargain with threats and promises). In this vein, Wu appeals to the healthy constitutional skepticism in the United States about government regulation of expressive conduct. One might associate his arguments with those of other luminaries who have proposed that we use communication technologies to create opportunities for discovery and progress. 11 See Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom 2 (2006) (describing a “new information environment” in which users play “a more active role” and its potential to “serve as a platform for better democratic participation” and as a “mechanism to achieve improvements in human development everywhere”); Lawrence Lessig, Code and Other Laws of Cyberspace 7–8 (1999) (arguing that preserving cyberspace as an open commons is key to checking government control and advocating for open code); Ithiel de Sola Pool, Technologies of Freedom 10 (1983) (“The onus is on us to determine whether free societies in the twenty-first century will conduct electronic communication under the conditions of freedom established for the domain of print through centuries of struggle, or whether that great achievement will become lost in a confusion about new technologies.”).
In any case, by presenting content moderation as a battle between human adjudication and artificial intelligence, Wu’s essay fails to identify the industrial designs, regulatory arrangements, and human labor that have put the Big Tech companies in their position of control. It does not engage the political economy and structural arrangements that constitute and condition online content moderation.
I generally admire and subscribe to Wu’s various accounts and critiques of the networked information economy. He is a clear and eloquent spokesperson for why positive procompetitive regulation and consumer protection in communications markets are vital to the operation of democracy. I therefore take his recent essay, with its relatively light touch on the Big Tech companies’ content moderation choices, as being addressed to the audience he names: the designers of these new hybrid processes. This Response, in contrast, is addressed to policymakers and reformers: the very people whom Wu has inspired with his other writing. I offer this caveat to say that Wu and I may not actually disagree as a matter of substance. I will simply use this generous opportunity to respond to his essay by identifying the reasons we cannot afford to turn away from the lived political economy that shapes our networked world.