GOVERNING ONLINE SPEECH:
FROM “POSTS-AS-TRUMPS” TO PROPORTIONALITY AND PROBABILITY

Online speech governance stands at an inflection point. The state of emergency that platforms invoked during the COVID-19 pandemic is subsiding, and lawmakers are poised to transform the regulatory landscape. What emerges from this moment will shape the most important channels for communication in the modern era and have profound consequences for individuals, societies, and democratic governance. Tracing the path to this point illuminates the tasks that the institutions created during this transformation must be designed to do. This history shows that where online speech governance was once dominated by the First Amendment tradition’s categorical and individualistic approach to adjudicating speech conflicts, that approach became strained, and online speech governance now revolves around two other principles: proportionality and probability. Proportionality requires no longer focusing on the speech interest in an individual post alone, but also taking account of other societal interests that can justify proportionate limitations on content. But the unfathomable scale of online speech makes enforcement of rules only ever a matter of probability: Content moderation will always involve error, and so the pertinent questions are what error rates are reasonable and which kinds of errors should be preferred. Platforms’ actions during the pandemic have thrown into stark relief the centrality of these principles to online speech governance and also how undertheorized they remain. This Article reviews the causes of this shift from a “posts-as-trumps” approach to online speech governance to one of systemic balancing and what this new era of content moderation entails for platforms and their regulators.

Twitter being abused to instill fear, to silence your voice, or to undermine individual safety, is unacceptable.

— @TwitterSafety, October 3, 2020 1 Twitter Safety (@TwitterSafety), Twitter (Oct. 3, 2020), https://twitter.com/TwitterSafety/status/1312498519094091779 (on file with the Columbia Law Review) (emphasis added).

 

A commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression we do it in service of one or more of the following values: Authenticity . . . Safety . . . Privacy . . . Dignity.

— Monika Bickert, Facebook Vice President, Global Policy Management, September 12, 2019 2 Monika Bickert, Updating the Values that Inform Our Community Standards, Facebook: Newsroom (Sept. 12, 2019), https://about.fb.com/news/2019/09/updating-the-values-that-inform-our-community-standards [https://perma.cc/8MF6-N8WV] [hereinafter Bickert, Updating the Values].

 

I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency.

— Steven Huffman, Reddit CEO, June 29, 2020 3 Casey Newton, Reddit Bans r/The_Donald and r/ChapoTrapHouse as Part of a Major Expansion of Its Rules, Verge (June 29, 2020), https://www.theverge.com/2020/6/29/21304947/reddit-ban-subreddits-the-donald-chapo-trap-house-new-content-policy-rules (on file with the Columbia Law Review) [hereinafter Newton, Reddit Bans].

Introduction

On March 6, 2020, Facebook announced it was banning ads for medical face masks across its platforms to prevent people from trying to exploit the COVID-19 public health emergency for commercial gain. 4 Guy Rosen, An Update on Our Work to Keep People Informed and Limit Misinformation About COVID-19, Facebook: Newsroom (Apr. 16, 2020), https://about.fb.com/news/2020/04/covid-19-misinfo-update [https://perma.cc/6JMQ-GAV9] [hereinafter Rosen, Facebook COVID-19 Update] (last updated May 12, 2020). A month later, the New York Times reported that Facebook’s ban was hampering volunteer efforts to create handsewn masks for medical professionals, as Facebook’s automated content moderation systems over-enforced the mask-ad ban. 5 Mike Isaac, Facebook Hampers Do-It-Yourself Mask Efforts, N.Y. Times (Apr. 5, 2020), https://www.nytimes.com/2020/04/05/technology/coronavirus-facebook-masks.html (on file with the Columbia Law Review). At the same time, BuzzFeed reported, Facebook was still profiting off scammers running mask ads not caught by those same systems. 6 Craig Silverman, Facebook Banned Mask Ads. They’re Still Running., BuzzFeed News (May 13, 2020), https://www.buzzfeednews.com/article/craigsilverman/facebook-mask-ads-ban-zestads-coronavirus [https://perma.cc/8Q8E-H7WW] [hereinafter Silverman, Facebook Mask-Ad Ban]. On June 10, 2020, Facebook noted that authorities’ guidance on wearing masks had “evolved” since the start of the pandemic, and the ban would be scaled back to permit promotion of nonmedical masks. 7 Rob Leathern, Allowing the Promotion of Non-Medical Masks on Facebook, Facebook for Business (June 10, 2020), https://www.facebook.com/business/news/allowing-the-promotion-of-non-medical-masks-on-facebook [https://perma.cc/R788-X6R7] (last updated Aug. 19, 2020). This was well after many experts had begun recommending masks, 8 See Zeynep Tufekci, Jeremy Howard & Trisha Greenhalgh, The Real Reason to Wear a Mask, Atlantic (Apr. 22, 2020), https://www.theatlantic.com/health/archive/2020/04/dont-wear-mask-yourself/610336 (on file with the Columbia Law Review) (discussing advice from experts to wear masks). but only shortly after the WHO changed its guidance. 9 Sarah Boseley, WHO Advises Public to Wear Face Masks When Unable to Distance, Guardian (June 5, 2020), https://www.theguardian.com/world/2020/jun/05/who-changes-advice-medical-grade-masks-over-60s [https://perma.cc/JGV5-7JPL].

This mask-ad ban example is a microcosm of the key challenges of content moderation on the largest social media platforms. The scale at which these platforms operate means that mistakes in enforcing any rule are inevitable: It will always be possible to find examples of both false positives (taking down volunteer mask makers) and false negatives (mask ads being approved to run on the site). In writing and enforcing a mask-ad ban, then, the issue is not simply whether such a ban is good in principle but also how to make trade-offs between speed, nuance, accuracy, and over- or under-enforcement. Whether to enact a ban in the first place is fraught too. Platforms justified their unusually interventionist approach to false information in the context of the COVID-19 pandemic in part because there were more clear-cut “authoritative” sources of information, such as the WHO, to which they could defer. 10 See, e.g., Press Release, Facebook, Facebook Press Call 17 (Mar. 18, 2020), https://about.fb.com/wp-content/uploads/2020/03/March-18-2020-Press-Call-Transcript.pdf [https://perma.cc/3XCR-PD8M] [hereinafter Facebook Press Call] (“[T]he WHO for example . . . have broad trust and a government mandate on [COVID-19] in a way that in other domains there just (isn’t) something like that.”). So what should platforms do when, as in the case of masks, those authorities lag behind the scientific consensus, or in other contexts where such clearly identifiable authorities do not exist?

There are no easy answers, but moving the project of online speech governance forward requires asking the right questions. Instead of thinking about content moderation through an individualistic lens typical of constitutional jurisprudence, platforms, regulators, and the public at large need to recognize that the First Amendment–inflected approach to online speech governance that dominated the early internet no longer holds. Instead, platforms are now firmly in the business of balancing societal interests and choosing between error costs on a systemic basis. This Article shows that these choices are endemic to every aspect of modern online speech governance and suggests that this requires a recalibration of our understanding of content moderation—the systems for writing and enforcing the rules for what social media platforms allow on their services.

This project of recalibration is urgent: Online speech governance stands at an inflection point. Lawmakers in the United States and abroad are poised to radically transform the existing legal landscape (and in some cases have already started doing so); 11 See Spandana Singh, Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content 9–11 (2019), https://d1y8sb8igg2f8e.cloudfront.net/documents/Everything_in_Moderation_2019-07-15_142127_tq36vr4.pdf [https://perma.cc/96D9-CUW9]. platforms are both trying to get ahead of these developments and playing catch-up to societal demands for more responsible content moderation through self-regulatory innovations and reforms. 12 See, e.g., Evelyn Douek, “What Kind of Oversight Board Have You Given Us?”, U. Chi. L. Rev. Online (May 11, 2020), https://lawreviewblog.uchicago.edu/2020/05/11/fb-oversight-board-edouek [https://perma.cc/V329-H8Y8] (exploring the design and potential of the Facebook Oversight Board, an “unprecedented experiment in content moderation governance”). Content moderation entered a “state of emergency” during the COVID-19 pandemic, 13 Evelyn Douek, The Internet’s Titans Make a Power Grab, Atlantic (Apr. 18, 2020), https://www.theatlantic.com/ideas/archive/2020/04/pandemic-facebook-and-twitter-grab-more-power/610213 (on file with the Columbia Law Review) [hereinafter Douek, The Internet’s Titans]. but the emergency is starting to subside. The governance institutions that emerge from this upheaval will define the future of online speech and, with it, modern public discourse.

Designing these institutions requires understanding the evolution of platform governance so far and what this reveals about the underlying dynamics of content moderation. That story shows that content moderation on major platforms, once dominated by a categorical and individualistic conception of online speech rights, is now crafted around two different precepts: proportionality and probability. That is, content moderation is a question of systemic balancing: Rules are written to encompass multiple interests, not just individual speech rights, and with awareness of the error rates inherent in enforcing any rule at the truly staggering scale of major platforms.

Recognizing this shift illuminates the nature of the adjudication required. 14 See Alec Stone Sweet & Jud Mathews, Proportionality Balancing and Constitutional Governance: A Comparative and Global Approach 13–14 (2019) [hereinafter Stone Sweet & Mathews, Proportionality Balancing and Constitutional Governance] (noting that different conceptions of rights “produce different approaches to rights adjudication”). Decisions centered on proportionality and probability are different in kind from categorical ones. Proportionality requires that intrusions on rights be justified, and that greater intrusions have stronger justifications. 15 Vicki C. Jackson, Constitutional Law in an Age of Proportionality, 124 Yale L.J. 3094, 3117–18 (2015). In constitutional systems, proportionality takes various doctrinal forms but always involves a balancing test that requires the decisionmaker to weigh societal interests against individual rights. 16 Richard H. Fallon, Jr., Strict Judicial Scrutiny, 54 UCLA L. Rev. 1267, 1296 (2007). This emphasis on justification and balancing therefore takes the decisionmaker from being a mere “taxonomist[]” 17 Kathleen M. Sullivan, Post-Liberal Judging: The Roles of Categorization and Balancing, 63 U. Colo. L. Rev. 293, 293 (1992). (categorizing types of content) to grocer (placing competing rights and interests on a scale and weighing them against each other) 18 Id. at 293–94. or epidemiologist (assessing risks to public health). 19 John Bowers & Jonathan Zittrain, Answering Impossible Questions: Content Governance in an Age of Disinformation, Harv. Kennedy Sch. Misinfo. Rev., Jan. 2020, at 1, 4–5. This task requires much greater transparency of reasoning.

Meanwhile, a probabilistic conception of online speech acknowledges that enforcement of the rules made as a result of this balancing will never be perfect, and so governance systems should take into account the inevitability of error and choose what kinds of errors to prefer. 20 See infra section II.B. Consciously accepting that getting speech determinations wrong in some percentage of cases is inherent to online speech governance requires much greater candor about error rates, which in turn allows rulemaking to be calibrated to the practical realities of enforcement.

The arrival of this new era in online speech governance is increasingly apparent, even if usually only implicitly acknowledged. Professor Jonathan Zittrain has observed a move from a “rights” era of online governance to a “public health” one that requires weighing risks and benefits of speech. 21 See Jonathan Zittrain, Three Eras of Digital Governance 1 (Sept. 15, 2019) (unpublished manuscript), https://www.ssrn.com/abstract=3458435 (on file with the Columbia Law Review) [hereinafter Zittrain, Three Eras]. Professor Tim Wu describes the “open and free” speech ideal of the first twenty years of the internet changing “decisively” to a “widespread if not universal emphasis among the major platforms . . . on creating ‘healthy’ and ‘safe’ speech environments online.” 22 Tim Wu, Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems, 119 Colum. L. Rev. 2001, 2009 (2019) [hereinafter Wu, Will Artificial Intelligence Eat the Law]. Contract for the Web, founded by Tim Berners-Lee, inventor of the World Wide Web, has called for companies to address the “risks created by their technologies,” including their online content, alongside their benefits. 23 Principle 6: Develop Technologies that Support the Best in Humanity and Challenge the Worst, Cont. for the Web, https://contractfortheweb.org/principles/principle-6-develop-technologies-that-support-the-best-in-humanity-and-challenge-the-worst [https://perma.cc/L4F2-7DNJ] (last visited Oct. 23, 2020). It is now fairly common to hear calls that “[c]ontent moderation on social platforms needs to balance the impact on society with the individual rights of speakers and the right for people to consume the content of their choice.” 24 Mathew Ingram, Former Facebook Security Chief Alex Stamos Talks About Political Advertising, Galley by CJR, https://galley.cjr.org/public/conversations/-LyjQOoPX4-yK-H78Mw6 (on file with the Columbia Law Review) (last visited Oct. 23, 2020) (emphasis added). A civil rights audit of Facebook admonished the company for still taking an unduly “selective view of free expression as Facebook’s most cherished value” without accounting for impacts on other rights. 25 Facebook’s Civil Rights Audit—Final Report 9 (2020), https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf [https://perma.cc/E5SX-CPWK].

Facebook’s update to the “values” that inform its Community Standards is perhaps the starkest example of the dominance of this new paradigm. 26 See Bickert, Updating the Values, supra note 2. Where once Facebook emphasized connecting people, 27 See, e.g., Note from Mark Zuckerberg, Facebook: Newsroom (Apr. 27, 2016), https://newsroom.fb.com/news/2016/04/marknote [https://perma.cc/9QAG-ESGL] (stating Facebook’s mission as “mak[ing] the world more open and connected”). it now acknowledges that voice should be limited for reasons of authenticity, safety, privacy, and dignity. 28 See Evelyn Douek, Why Facebook’s “Values” Update Matters, Lawfare (Sept. 16, 2019), https://www.lawfareblog.com/why-facebooks-values-update-matters [https://perma.cc/3ZDK-VXPK] [hereinafter Douek, Why Facebook’s Update Matters]. As a result, “Although the Community Standards do not explicitly reference proportionality, the method described . . . invokes some elements of a traditional proportionality test.” 29 Matthias C. Kettemann & Wolfgang Schulz, Setting Rules for 2.7 Billion: A (First) Look into Facebook’s Norm-Making System: Results of a Pilot Study 20 (Hans-Bredow-Institut Working Paper No. 1, 2020), https://www.hans-bredow-institut.de/uploads/media/default/cms/media/k0gjxdi_AP_WiP001InsideFacebook.pdf [https://perma.cc/8EB5-XSVS]. Similarly, Twitter CEO Jack Dorsey has acknowledged that Twitter’s early rules “likely over-rotated on one value” and that the platform would now root its rules in “human rights law,” 30 Jack Dorsey (@jack), Twitter (Aug. 10, 2018), https://twitter.com/jack/status/1027962500438843397 (on file with the Columbia Law Review). which includes a proportionality test. 31 See, e.g., David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, at 4, U.N. Doc. A/HRC/38/35 (Apr. 6, 2018) (identifying “proportionality” as one of the requirements for “State limitations on freedom of expression”).

Similarly, there has been increasing acknowledgment that enforcement of rules will never be perfect. 32 See, e.g., Monika Bickert, Facebook, Charting a Way Forward: Online Content Regulation 7 (2020), https://about.fb.com/wp-content/uploads/2020/02/Charting-A-Way-Forward_Online-Content-Regulation-White-Paper-1.pdf [https://perma.cc/2E37-RRRH] [hereinafter Bickert, Charting a Way Forward] (“[I]nternet companies’ enforcement of content standards will always be imperfect.”). That is, content moderation will always be a matter of probability. Tech companies and commentators accept that the sheer volume of speech that has migrated online, and thereby become tractable and in some sense governable, makes it unrealistic to expect rules to be applied correctly in every case. 33 See infra section I.C.1. The discourse is (slowly) shifting from simple exhortations to “do better” and “nerd harder,” 34 Evelyn Douek, Australia’s “Abhorrent Violent Material” Law: Shouting “Nerd Harder” and Drowning Out Speech, 94 Austl. L.J. 41, 50 n.77 (2020) [hereinafter Douek, Nerd Harder]. to more nuanced conversations about how to align incentives so that all relevant interests are balanced and unavoidable error costs are not disproportionately assigned in any direction. 35 See, e.g., Bickert, Charting a Way Forward, supra note 32; French Sec’y of State for Digit. Affs., Creating a French Framework to Make Social Media Platforms More Accountable: Acting in France with a European Vision: Regulation of Social Networks—Facebook Experiment 13–14 (2019), https://www.numerique.gouv.fr/uploads/Regulation-of-social-networks_Mission-report_ENG.pdf [https://perma.cc/UAM5-ZP5K] (advocating for public policy that balances punitive and preventative approaches).

Content moderation practices during the COVID-19 pandemic have epitomized this new paradigm, 36 See infra section I.D. throwing into sharp relief the interest balancing and error choices that platforms make. Platforms cracked down on misinformation in an unprecedented fashion because the harms were judged to be especially great. 37 See infra section I.D. They did this despite acknowledging that circumstances meant there would be higher error rates than normal because the costs of moderating inadequately were less than the costs of not moderating at all. 38 See infra section I.D.2. But this apparently exceptional content moderation during the pandemic was only a more exaggerated version of how content moderation works all the time. 39 Evelyn Douek, COVID-19 and Social Media Content Moderation, Lawfare (Mar. 25, 2020), https://www.lawfareblog.com/covid-19-and-social-media-content-moderation [https://perma.cc/6MF4-29PQ] [hereinafter Douek, COVID-19 and Social Media Content Moderation].

What this paradigm shift means for platform governance and its regulation remains undertheorized but is especially important to examine now for two reasons. First, without adapting speech governance to the very different nature of the task being undertaken—systemic balancing instead of individual categorization—platform decisionmaking processes and the rules that govern online speech will continue to be viewed as illegitimate. Because there is no “right” answer to most, if not all, of the questions involved in writing rules for online speech, the rule-formation process is especially important for garnering public acceptance and legitimacy. 40 See Ben Bradford, Florian Grisel, Tracey L. Meares, Emily Owens, Baron L. Pineda, Jacob N. Shapiro, Tom R. Tyler & Danieli Evans Peterman, Report of the Facebook Data Transparency Advisory Group 34–39 (2019), https://law.yale.edu/sites/default/files/area/center/justice/document/dtag_report_5.22.2019.pdf [https://perma.cc/3AKC-UUWP] [hereinafter Facebook Data Transparency Advisory Group] (“Facebook could build public trust and legitimacy . . . by following principles of procedural justice in its interactions with users.”); Tom R. Tyler, Procedural Justice, Legitimacy, and the Effective Rule of Law, 30 Crime & Just. 283, 284 (2003) (highlighting several studies that suggest “people’s willingness to accept the constraints of law . . . is strongly linked to their evaluations of the procedural justice of the police and the courts”); Rory Van Loo, Federal Rules of Platform Procedure, U. Chi. L. Rev. (forthcoming) (manuscript at 28), https://ssrn.com/abstract=3576562 (on file with the Columbia Law Review) (“[T]here is strong evidence that the added trust and legitimacy gained from effective dispute resolution systems improves a company’s profitability due to better customer retention and increased customer engagement.”).

Second, regulators around the world are currently writing laws to change the regulatory landscape for online speech. In the United States in particular, the law that “created the internet” 41 Jeff Kosseff, The Twenty-Six Words that Created the Internet 8 (2019). —Section 230 of the Communications Decency Act 42 47 U.S.C. § 230 (2018). —is increasingly under siege across the political spectrum, with its reform seemingly imminent. 43 See, e.g., Editorial, Section 230 Does Not Need a Revocation. It Needs a Revision., Wash. Post (June 28, 2020), https://www.washingtonpost.com/opinions/trump-and-biden-both-want-to-repeal-this-tech-rule-theyre-both-wrong/2020/06/28/4de6f9fc-b4b1-11ea-a8da-693df3d7674a_story.html (on file with the Columbia Law Review) (noting that both President Trump and Joe Biden have called for the repeal of Section 230). But changing the regulatory environment without a proper understanding of content moderation in practice will make the laws ineffective or, worse, create unintended consequences. Regulators need to understand the inherent characteristics of the systems they seek to reform. Regulation that entrenches one right or interest without acknowledging the empirical realities of how online speech operates and is constantly changing, or that adopts a punitive approach focused on individual cases, will fail to bring the accountability that is currently lacking from platforms without necessarily protecting those harmed by their decisions. 44 See infra section III.B.2. This Article therefore offers an account of the role of proportionality and probability in online speech governance and the questions it raises for such governance and its regulation.

This Article concentrates on tech platforms’ role as the current primary rulemakers and enforcers of online content regulation, the focus of a rapidly growing literature. 45 See Hannah Bloch-Wehba, Automation in Moderation, Cornell Int’l L.J. (forthcoming 2020) (manuscript at 4 n.11), https://ssrn.com/abstract=3521619 (on file with the Columbia Law Review) [hereinafter Bloch-Wehba, Automation in Moderation]; Van Loo, supra note 40 (manuscript at 3). This is for two reasons. First, content moderation will always go beyond what governments can constitutionally provide for. The First Amendment would not permit laws requiring removal of content like the Christchurch Massacre livestream, 46 E.g., Brandenburg v. Ohio, 395 U.S. 444, 447–49 (1969) (holding that a state may forbid speech that advocates violence only if the speech is intended to provoke imminent illegal activity and is likely to do so). violent animal crush videos, 47 United States v. Stevens, 559 U.S. 460, 481–82 (2010) (striking down a federal law that criminalized depictions of animal cruelty under the First Amendment’s overbreadth doctrine). or graphic pornography, 48 Am. Booksellers Ass’n v. Hudnut, 771 F.2d 323, 332–34 (7th Cir. 1985), aff’d 475 U.S. 1001 (1986). for example, but few would disagree that platforms should have some license to moderate this content to protect their services from becoming unusable. How far this license should extend may be contested, but it is relatively uncontroversial that private actors can restrict more speech than governments. Second, the scale of online content will make private platforms’ role as the frontline actors in content moderation an ongoing practical necessity. Governments will not have the resources or technical capacity to take over.

As much as platforms are building bureaucracies and norms in a way that can resemble those of governments, 49 See generally Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018) (discussing the ways in which internet platforms have developed detailed systems for governing online speech that are rooted in the American legal system). they remain private actors with obvious business interests and are unencumbered by the constraints of public law. The project of online speech governance centers on the question of how to square this triangle 50 Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2012 (2018); Robert Gorwa, The Platform Governance Triangle: Conceptualising the Informal Regulation of Online Content, Internet Pol’y Rev., June 2019, at 1, 2. of unaccountable private actors exercising enormous power over systemically important public communication while accepting the constitutional and practical limitations of government regulation. This Article’s contribution to that task is to describe and give a conceptual framework to the radical changes that have occurred in the actual operation of content moderation in the last half decade alone. Part I begins by describing the categorical and individualistic paradigm of early content moderation—what this Article calls its “posts-as-trumps” era—and how this has given way to an era defined by proportionality and probability in online speech governance. This Article argues that this governance based on systemic balancing is both normatively and pragmatically a better fit for the modern realities of online speech. Descriptively, these principles already shape online speech, whether explicitly acknowledged or not. A case study of platform content moderation during the COVID-19 pandemic illustrates this starkly. Part II turns to the questions that governance based on proportionality and probability raises for decisionmakers and shows that the failure to adequately address these has left current governance arrangements fundamentally unstable and unsatisfying.

Part III turns to the urgent project of addressing these deficiencies. This Article argues that, despite first appearances, systemic balancing in online speech governance need not entail a devaluing or deflation of speech rights. In fact, as a methodological approach, it does not demand any particular substantive results and could yield more speech-protective rules. The critical point is that recognizing the role of systemic balancing orients debates around the right questions. This Article therefore turns to what these questions are for both platforms and regulators, and discusses their impact on what content moderation should look like in a post-pandemic world.

Online speech governance is a wicked problem with unenviable and perhaps impossible trade-offs. There is no end-state of content moderation with stable rules or regulatory forms; it will always be a matter of contestation, iteration, and technological evolution. That said, this is an unusual period of disruption and experimentation, as the prevailing forms of internet governance have become inadequate and new systems are emerging to replace them. Understanding what tasks these institutions must be designed to fulfill is the first step to evaluating, improving, and regulating them.