Twitter being abused to instill fear, to silence your voice, or to undermine individual safety, is unacceptable.
— @TwitterSafety, October 3, 2020
A commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression we do it in service of one or more of the following values: Authenticity . . . Safety . . . Privacy . . . Dignity.
— Monika Bickert, Facebook Vice President, Global Policy Management, September 12, 2019
I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency.
— Steven Huffman, Reddit CEO, June 29, 2020
Introduction
On March 6, 2020, Facebook announced it was banning ads for medical face masks across its platforms to prevent people from trying to exploit the COVID-19 public health emergency for commercial gain.
A month later, the New York Times reported that Facebook’s ban was hampering volunteer efforts to create handsewn masks for medical professionals, as Facebook’s automated content moderation systems over-enforced the mask-ad ban.
At the same time, BuzzFeed reported, Facebook was still profiting off scammers running mask ads not caught by those same systems.
On June 10, 2020, Facebook noted that authorities’ guidance on wearing masks had “evolved” since the start of the pandemic, and the ban would be scaled back to permit promotion of nonmedical masks.
This was well after many experts had begun recommending masks,
but only shortly after the WHO changed its guidance.
This mask-ad ban example is a microcosm of the key challenges of content moderation on the largest social media platforms. The scale at which these platforms operate means mistakes in enforcing any rule are inevitable: It will always be possible to find examples of both false positives (taking down volunteer mask makers) and false negatives (mask ads being approved to run on the site). In writing and enforcing a mask-ad ban, then, the issue is not simply whether such a ban is good in principle but also how to make trade-offs between speed, nuance, accuracy, and over- or under-enforcement. Whether to enact a ban in the first place is fraught too. Platforms justified their unusually interventionist approach to false information in the context of the COVID-19 pandemic in part because there were more clear-cut “authoritative” sources of information, such as the WHO, to which they could defer.
So what should platforms do when, as in the case of masks, those authorities lag behind or contradict evolving scientific consensus, or in other contexts where such clearly identifiable authorities do not exist?
There are no easy answers, but moving the project of online speech governance forward requires asking the right questions. Instead of thinking about content moderation through an individualistic lens typical of constitutional jurisprudence, platforms, regulators, and the public at large need to recognize that the First Amendment–inflected approach to online speech governance that dominated the early internet no longer holds. Instead, platforms are now firmly in the business of balancing societal interests and choosing between error costs on a systemic basis. This Article shows that these choices are endemic to every aspect of modern online speech governance and suggests that this requires a recalibration of our understanding of content moderation—the systems for writing and enforcing the rules for what social media platforms allow on their services.
This project of recalibration is urgent: Online speech governance stands at an inflection point. Lawmakers in the United States and abroad are poised to radically transform the existing legal landscape (and in some cases have already started doing so);
platforms are both trying to get ahead of these developments and playing catch-up to societal demands for more responsible content moderation through self-regulatory innovations and reforms.
Content moderation entered a “state of emergency” during the COVID-19 pandemic,
but the emergency is starting to subside. The governance institutions that emerge from this upheaval will define the future of online speech and, with it, modern public discourse.
Designing these institutions requires understanding the evolution of platform governance so far and what this reveals about the underlying dynamics of content moderation. That story shows that content moderation on major platforms, once dominated by a categorical and individualistic conception of online speech rights, is now crafted around two different precepts: proportionality and probability. That is, content moderation is a question of systemic balancing: Rules are written to encompass multiple interests, not just individual speech rights, and with awareness of the error rates inherent in enforcing any rule at the truly staggering scale of major platforms.
Recognizing this shift illuminates the nature of adjudication required.
Decisions centered around proportionality and probability are different in kind. Proportionality requires that intrusions on rights be justified and that greater intrusions have stronger justifications.
In constitutional systems, proportionality takes various doctrinal forms, but it always involves a balancing test that requires the decisionmaker to weigh societal interests against individual rights.
This emphasis on justification and balancing therefore takes the decisionmaker from being a mere “taxonomist[]”
(categorizing types of content) to grocer (placing competing rights and interests on a scale and weighing them against each other)
or epidemiologist (assessing risks to public health).
This task requires much greater transparency of reasoning.
Meanwhile, a probabilistic conception of online speech acknowledges that enforcement of the rules made as a result of this balancing will never be perfect, and so governance systems should take into account the inevitability of error and choose what kinds of errors to prefer.
Consciously accepting that getting some proportion of speech determinations wrong is inherent in online speech governance requires far greater candor about error rates, which in turn allows rulemaking to be calibrated to the practical realities of enforcement.
The arrival of this new era in online speech governance is increasingly apparent, even if usually only implicitly acknowledged. Professor Jonathan Zittrain has observed a move from a “rights” era of online governance to a “public health” one that requires weighing risks and benefits of speech.
Professor Tim Wu describes the “open and free” speech ideal of the first twenty years of the internet changing “decisively” to a “widespread if not universal emphasis among the major platforms . . . on creating ‘healthy’ and ‘safe’ speech environments online.”
Contract for the Web, founded by Tim Berners-Lee, inventor of the World Wide Web, has called for companies to address the “risks created by their technologies,” including their online content, alongside their benefits.
It is now fairly common to hear calls that “[c]ontent moderation on social platforms needs to balance the impact on society with the individual rights of speakers and the right for people to consume the content of their choice.”
A civil rights audit of Facebook admonished the company for still taking an unduly “selective view of free expression as Facebook’s most cherished value” without accounting for impacts on other rights.
Facebook’s update to the “values” that inform its Community Standards is perhaps the starkest example of the dominance of this new paradigm.
Where once Facebook emphasized connecting people,
it now acknowledges that voice should be limited for reasons of authenticity, safety, privacy, and dignity.
As a result, “Although the Community Standards do not explicitly reference proportionality, the method described . . . invokes some elements of a traditional proportionality test.”
Similarly, Twitter CEO Jack Dorsey has acknowledged that Twitter’s early rules “likely over-rotated on one value” and the platform would now root its rules in “human rights law,”
which includes a proportionality test.
There has also been increasing acknowledgment that enforcement of rules will never be perfect.
That is, content moderation will always be a matter of probability. Tech companies and commentators accept that the sheer volume of speech made tractable, and therefore in some sense governable, by its migration online makes it unrealistic to expect rules to be applied correctly in every case.
The discourse is (slowly) shifting from simple exhortations to “do better” and “nerd harder,”
to more nuanced conversations about how to align incentives so that all relevant interests are balanced and unavoidable error costs are not disproportionately assigned in any direction.
Content moderation practices during the COVID-19 pandemic have epitomized this new paradigm,
throwing into sharp relief the interest balancing and error choices that platforms make. Platforms cracked down on misinformation in an unprecedented fashion because the harms were judged to be especially great.
They did this despite acknowledging that the circumstances meant error rates would be higher than normal, because the costs of imperfect moderation were judged to be lower than the costs of not moderating at all.
But this apparently exceptional content moderation during the pandemic was only a more exaggerated version of how content moderation works all the time.
What this paradigm shift means for platform governance and its regulation remains undertheorized but is especially important to examine now for two reasons. First, without adapting speech governance to the very different nature of the task being undertaken—systemic balancing instead of individual categorization—platform decisionmaking processes and the rules that govern online speech will continue to be viewed as illegitimate. Because there is no “right” answer to most, if not all, of the questions involved in writing rules for online speech, the rule-formation process is especially important for garnering public acceptance and legitimacy.
Second, regulators around the world are currently writing laws to change the regulatory landscape for online speech. In the United States in particular, the law that “created the internet”
—Section 230 of the Communications Decency Act
—is increasingly under siege across the political spectrum, with its reform seemingly imminent.
But changing the regulatory environment without a proper understanding of content moderation in practice will make the laws ineffective or, worse, create unintended consequences. Regulators need to understand the inherent characteristics of the systems they seek to reform. Regulation that entrenches one right or interest without acknowledging the empirical realities of how online speech operates and is constantly changing, or that adopts a punitive approach focused on individual cases, will fail to bring the accountability that is currently lacking from platforms without necessarily protecting those harmed by their decisions.
This Article therefore offers an account of the role of proportionality and probability in online speech governance and of the questions this shift raises for such governance and its regulation.
This Article concentrates on tech platforms’ role as the current primary rulemakers and enforcers of online content regulation, the focus of a rapidly growing literature.
This is for two reasons. First, content moderation will always go beyond what governments can constitutionally provide for. The First Amendment would not permit laws requiring removal of content like the Christchurch Massacre livestream,
violent animal crush videos,
or graphic pornography,
for example, but few would disagree that platforms should have some license to moderate this content to protect their services from becoming unusable. How far this license should extend may be contested, but it is relatively uncontroversial that private actors can restrict more speech than governments. Second, the scale of online content will make private platforms’ role as the frontline actors in content moderation an ongoing practical necessity. Governments will not have the resources or technical capacity to take over.
As much as platforms are building bureaucracies and norms in a way that can resemble those of governments,
they remain private actors with obvious business interests and are unencumbered by the constraints of public law. The project of online speech governance centers around the question of how to square this triangle
of unaccountable private actors exercising enormous power over systemically important public communication while accepting the constitutional and practical limitations of government regulation. This Article’s contribution to that task is to describe, and to provide a conceptual framework for, the radical changes that have occurred in the actual operation of content moderation in the last half decade alone.

Part I begins by describing the categorical and individualistic paradigm of early content moderation—what this Article calls its “posts-as-trumps” era—and how this has given way to an era defined by proportionality and probability in online speech governance. This Article argues that governance based on systemic balancing is both normatively and pragmatically a better fit for the modern realities of online speech. Descriptively, these principles already shape online speech, whether explicitly acknowledged or not. A case study of platform content moderation during the COVID-19 pandemic illustrates this starkly.

Part II turns to the questions that governance based on proportionality and probability raises for decisionmakers and shows that the failure to adequately address these questions has left current governance arrangements fundamentally unstable and unsatisfying.
Part III turns to the urgent project of addressing these deficiencies. This Article argues that, despite first appearances, systemic balancing in online speech governance need not entail a devaluing or deflation of speech rights. In fact, as a methodological approach, it does not demand any particular substantive results and could result in more speech-protective rules. The critical point is that recognizing the role of systemic balancing orients debates around the right questions. This Article therefore turns to what these questions are for both platforms and regulators, and discusses their impact on what content moderation should look like in a post-pandemic world.
Online speech governance is a wicked problem with unenviable and perhaps impossible trade-offs. There is no end-state of content moderation with stable rules or regulatory forms; it will always be a matter of contestation, iteration, and technological evolution. That said, this is an unusual period of disruption and experimentation, as the prevailing forms of internet governance have become inadequate and new systems are emerging to replace them. Understanding what tasks these institutions must be designed to fulfill is the first step to evaluating, improving, and regulating them.