DISTINGUISHING PRIVACY LAW: A CRITIQUE OF PRIVACY AS SOCIAL TAXONOMY

What distinguishes privacy violations from other harms? This has proven a surprisingly difficult question to answer. For over a century, privacy law scholars labored to define the elusive concept of privacy. Then they gave up. Efforts to distinguish privacy were superseded at the turn of the millennium by a new approach: a taxonomy of privacy problems grounded in social recognition. Privacy law became the field that simply studies whatever courts or scholars talk about as related to privacy.

Decades into privacy as social taxonomy, the field has expanded to encompass a broad range of information-based harms—from consumer manipulation to algorithmic bias—generating many rich insights. Yet this approach has come at a cost. This Essay diagnoses the pathologies of a field that has abandoned defining its core subject matter and offers a research agenda for privacy in the aftermath of social recognition.

Our critique is overdue. It is past time to think anew about exactly what work the concept of privacy is doing in a complex information environment and why a given societal problem—from discrimination to misinformation—is worthy of study under a privacy framework. Only then can privacy scholars articulate what we are expert in and participate meaningfully in global policy discussions about how best to govern information-based harms.

Introduction

A police drone peers through a second-story apartment window to check whether an armed robbery suspect is there. 1 See Cindy Chang, LAPD Deploys Controversial Drone for the First Time, L.A. Times (Jan. 15, 2019), https://www.latimes.com/local/lanow/la-me-lapd-drone-20190115-story.html (on file with the Columbia Law Review). Facebook withholds advertising for financial services from older users and female users. 2 See Jonathan Stempel, Facebook Sued for Age, Gender Bias in Financial Services Ads, Reuters (Oct. 31, 2019), https://www.reuters.com/article/us-facebook-lawsuit-bias/facebook-sued-for-age-gender-bias-in-financial-services-ads-idUSKBN1XA2G8 [https://perma.cc/6GUT-VGJ7]. A consumer is tricked into sharing more personal information than they intended. 3 See Alicia Adamczyk, These Are the ‘Potentially Unlawful’ Tactics Retailers Use to Trick Customers Into Spending More Money, CNBC (Nov. 27, 2019), https://www.cnbc.com/2019/11/27/how-retailers-trick-customers-into-buying-more.html [https://perma.cc/F3ZL-XU5F]. A family living in a predominantly Asian American neighborhood is charged a higher price for SAT test preparation. 4 See Julia Angwin, Surya Mattu & Jeff Larson, The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price From Princeton Review, ProPublica (Sept. 1, 2015), https://www.propublica.org/article/asians-nearly-twice-as-likely-to-get-higher-price-from-princeton-review [https://perma.cc/588H-PEHK]. Farm robots outfitted with cameras and data processors collect and crunch data to optimize farming. 5 Amanda Little, Opinion, Farm Robots Will Help Feed the World During Climate Change, Bloomberg L. (June 2, 2022), https://www.bloomberglaw.com/bloombergterminalnews/bloomberg-terminal-news/RCUO2CDWX2QK (on file with the Columbia Law Review). A pregnancy-tracking app grants pregnant users’ employers a royalty-free license to mine their de-identified personal information. 6 Drew Harwell, Is Your Pregnancy App Sharing Your Intimate Data With Your Boss?, Wash. Post (Apr. 10, 2019), https://www.washingtonpost.com/technology/2019/04/10/tracking-your-pregnancy-an-app-may-be-more-public-than-you-think/ (on file with the Columbia Law Review). A renter is denied an apartment after the screening company’s automated background check system incorrectly pulls in criminal records for women with different middle names, races, and birth dates. 7 Lauren Kirchner & Matthew Goldstein, How Automated Background Checks Freeze Out Renters, N.Y. Times (May 28, 2020), https://www.nytimes.com/2020/05/28/business/renters-background-checks.html (on file with the Columbia Law Review).

These scenarios, and countless others, have been recognized as problems involving “privacy.” Are they? This vibrant, interdisciplinary field with decades of history possesses no real sense of what constitutes a privacy problem and what does not. Though these scenarios implicate different values and arise from different contexts, none would be out of place at a privacy law conference. Yet other types of information-based harms—TikTok users sharing a fake screenshot of a nonexistent CNN headline suggesting that climate change is seasonal, 8 Tiffany Hsu, Worries Grow that TikTok Is New Home for Manipulated Video and Photos, N.Y. Times (Nov. 4, 2022), https://www.nytimes.com/2022/11/04/technology/tiktok-deepfakes-disinformation.html (on file with the Columbia Law Review). for example—would be out of place. Why? No one can say.

Throughout the twentieth century, scholars sought to define and distinguish the concept of privacy. A parade of articles and books, from The Right to Privacy onward, offered varying definitions for this elusive idea. 9 See infra Part I. Privacy amounts to a right “to be let alone,” 10 Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 195 (1890) (internal quotation marks omitted) (quoting Thomas M. Cooley, A Treatise on the Law of Torts or the Wrongs Which Arise Independent of Contract 29 (2d ed. 1888)). these works argued, or “the control we have over information about ourselves.” 11 Charles Fried, Privacy, 77 Yale L.J. 475, 482 (1968) (emphasis omitted). Privacy involves access to the self or self-determination. 12 See, e.g., Anita L. Allen, Uneasy Access: Privacy for Women in a Free Society 13–17 (1988) [hereinafter Allen, Uneasy Access] (“[P]ersonal privacy is a condition of inaccessibility of the person, his or her mental states, or information about the person to the senses or surveillance devices of others.”); Edward J. Eberle, Human Dignity, Privacy, and Personality in German and American Constitutional Law, 1997 Utah L. Rev. 963, 1000 (defining “informational self-determination” as a conception of privacy that seeks to “preserve the integrity of human personality against the onslaught of the technological age and of prying eyes”); Ruth Gavison, Privacy and the Limits of Law, 89 Yale L.J. 421, 423 (1980) [hereinafter Gavison, Privacy and the Limits of Law] (arguing that privacy “is related to our concern over our accessibility to others”). Over one hundred years of debating solitude 13 See Gabriel García Márquez, One Hundred Years of Solitude (Gregory Rabassa trans., Harper & Row 1970). yielded no universally agreed-upon definition. But that did little to deter privacy scholars from trying.

At the turn of the millennium, a new voice arose that would come to shape the field of American privacy scholarship for decades. In a series of articles and books, Professor Daniel J. Solove dismissed attempts to define privacy as invariably over- or underinclusive. 14 See Daniel J. Solove, Understanding Privacy 8 (2008) (criticizing privacy theories that characterize privacy as a “unitary concept with a uniform value that is unvarying across different situations” and explaining that the “attempt to locate the ‘essential’ or ‘core’ characteristics of privacy has led to failure”); Daniel J. Solove, Conceptualizing Privacy, 90 Calif. L. Rev. 1087, 1124 (2002) [hereinafter Solove, Conceptualizing Privacy] (arguing that settling on any of the six common conceptions of privacy described in the article would result in “either a reductive or an overly broad account of privacy”); Daniel J. Solove, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477, 485–86 (2006) [hereinafter Solove, Taxonomy of Privacy] (claiming that attempts to find a single essence of privacy are usually “too broad and vague”). Embracing a pragmatism similar to that of Justice Oliver Wendell Holmes, Jr., 15 See Louis Menand, The Metaphysical Club: A Story of Ideas in America 339–47 (2001) (describing Holmes’s theory of the law, which claimed that decisions are fundamentally dictated by experience, not formal doctrinal logic). Solove exhorted the field to abandon the quixotic quest to attach a single definition to privacy. 16 See Solove, Taxonomy of Privacy, supra note 14, at 481–82 (arguing for a framework to evaluate privacy issues based on specific harmful activities instead of defaulting to a single definition that is too vague to be useful for effective policymaking and lawmaking). In its place, Solove offered a taxonomy of “the specific activities that pose privacy problems,” a loosely correlated set of concerns and concepts that have come to be associated with privacy in its many forms. 17 Id. at 482, 489–91.

The taxonomizing of privacy was not without precedent. Professor William Prosser famously distilled four privacy torts from decades of case law, 18 See William L. Prosser, Privacy, 48 Calif. L. Rev. 383, 389 (1960) (proposing “four distinct kinds of invasion of four different interests of the plaintiff, which are tied together by the common name, but otherwise have almost nothing in common except that each represents an interference with the right of the plaintiff . . . ‘to be let alone’” (quoting Thomas M. Cooley, A Treatise on the Law of Torts or the Wrongs Which Arise Independent of Contract 29 (2d ed. 1888))). and Professor Alan Westin compiled a taxonomy of privacy attitudes. 19 See Alan F. Westin, Privacy and Freedom 31–32 (1967) (identifying four psychological conditions or states of individual privacy). Nor has the taxonomy of privacy entirely evaded critique. 20 See, e.g., Jeffrey Bellin, Pure Privacy, 116 Nw. U. L. Rev. 463, 465–68 (2021) (listing the drawbacks of not being able to define the term “privacy”); M. Ryan Calo, The Boundaries of Privacy Harm, 86 Ind. L.J. 1131, 1140–42 (2011) [hereinafter Calo, Boundaries of Privacy Harm] (highlighting the limitations of the taxonomic approach and the pressing need for principles that delimit privacy harm); David E. Pozen, Privacy–Privacy Tradeoffs, 83 U. Chi. L. Rev. 221, 226–27 (2016) (pointing out how the “capaciousness” of Solove’s taxonomic approach “exacerbates the dilemma of privacy-privacy tradeoffs”). But Solove’s specific rejection of privacy conceptualization in favor of a taxonomic approach continues to exert a profound influence on the shape of contemporary privacy scholarship. As Professor Woodrow Hartzog recently explained, abandoning definition in favor of taxonomy helped breathe new life into the field. 21 See Woodrow Hartzog, What Is Privacy? That’s the Wrong Question, 88 U. Chi. L. Rev. 1677, 1687 (2021) (“By getting us past the threshold question of what privacy is, Solove’s work provides room for scholars and lawmakers to tackle bigger phenomena . . . .”). Unburdened by the need to define privacy, the field has seen, over the past two decades, a Cambrian explosion in the arguments and issues at the heart of mainstream privacy scholarship.

This Essay argues that the long-dominant social-taxonomic approach to privacy and privacy law is no longer serving the field. There are several important reasons why. First, social recognition alone is not—and never has been—a sufficient criterion for what counts as a privacy problem. Instead of comparing an information-based harm to a set definition of a privacy harm, the taxonomic approach asks whether the right people or institutions—typically courts, public officials, and established scholars—talk about the harm as involving privacy. 22 See Calo, Boundaries of Privacy Harm, supra note 20, at 1141 (“Solove’s criteria for inclusion involve recognition by the right sorts of authorities.”). In and of itself, this approach raises critical questions about authority, legitimacy, and whose voices should be heard and valued when it comes to identifying new privacy harms.

The social-taxonomic approach also neglects, and arguably impedes, the development of a sophisticated framework for interrogating the tensions among the various values under the privacy umbrella. For example, many free speech scholars see privacy as an impediment to self-expression. 23 See, e.g., Solveig Singleton, Privacy Versus the First Amendment: A Skeptical Approach, 11 Fordham Intell. Prop. Media & Ent. L.J. 97, 97 (2000) (“The courts should think twice before sacrificing the mature law of free speech to the less coherent concerns about privacy.”); Eugene Volokh, Freedom of Speech and Information Privacy: The Troubling Implications of a Right to Stop People From Speaking About You, 52 Stan. L. Rev. 1049, 1051 (2000) (“While privacy protection secured by contract is constitutionally sound, broader information privacy rules are not easily defensible under existing free speech law.”). Other scholars in the critical tradition have explored how privacy is deployed as cover for subordination. 24 See, e.g., Catharine A. MacKinnon, Privacy v. Equality: Beyond Roe v. Wade, in Feminism Unmodified 93, 102 (1987) [hereinafter MacKinnon, Privacy v. Equality] (describing the right to privacy as “a right of men ‘to be let alone’ to oppress women one at a time” (footnote omitted) (quoting Warren & Brandeis, supra note 10, at 205)); Lucinda M. Finley, Transcending Equality Theory: A Way Out of the Maternity and the Workplace Debate, 86 Colum. L. Rev. 1118, 1119 (1986) (“The notion that the world of remunerative work and the world of home—or the realms of production and reproduction—are separate, has fostered the economic and social subordination of women . . . .”); Elizabeth M. Schneider, The Violence of Privacy, 23 Conn. L. Rev. 973, 975 (1991) [hereinafter Schneider, Violence of Privacy] (“The notion of marital privacy has been a source of oppression to battered women and has helped to maintain women’s subordination within the family.”). And a decade or more of work in algorithmic accountability illustrates the tension between privacy and antidiscrimination or fairness. 25 See, e.g., Roger Allan Ford & W. Nicholson Price II, Privacy and Accountability in Black-Box Medicine, 23 Mich. Telecomms. & Tech. L. Rev. 1, 4 (2016) (“The solution to the accountability problem is to validate black-box models, but that requires access to more information, which can exacerbate the privacy problem. And the solution to the privacy problem is to limit [information] . . . but that can make it harder to validate models and easier to hide . . . problems.”); Finale Doshi-Velez & Mason Kortz, Accountability of AI Under the Law: The Role of Explanation 10 (2017), https://dash.harvard.edu/bitstream/handle/1/34372584/2017-11_aiexplainability-1.pdf [https://perma.cc/E7NE-3E9C] (unpublished working paper) (“AI systems do not automatically store information about their decisions. . . . [U]nlike human decision-makers, AI systems can delete information to optimize their data storage and protect privacy. However, an AI system designed this way would not be able to generate ex post explanations the way a human can.”). Yet this expansive, criteria-free approach to privacy has come to fold in information-based threats to self-expression, antisubordination, and fairness as core privacy concerns. 26 See infra section I.C. The result is a proliferation of vexing “privacy–privacy tradeoffs” 27 See Pozen, supra note 20, at 222. with little hope of reconciliation.

Situating privacy law within the broader structure of information-based power has become a critical task for scholars and policymakers alike. American privacy law scholarship has yet to even reconcile the basic distinction between privacy and data protection, 28 See infra notes 285–294 and accompanying text. let alone the new modes of information governance that European and other societies are exploring today. 29 See infra notes 295–300 and accompanying text. Distinguishing privacy from data protection, content moderation, or antidiscrimination law would shed light on the precise goals societies are trying to meet, the range of approaches that exist to meet them, and the institutions best suited to address these issues. The FTC, for example, may be better positioned to address violations of information privacy, whereas the DOJ Civil Rights Division is better versed in antidiscrimination law. But to this day, the European Union has not recognized any American federal agency as a data-protection authority, in part because our information governance goals are imprecise and scattered haphazardly across various institutions and agencies. As a result, only recently have the United States and the European Union agreed on a privacy framework to share data. 30 See 2023 O.J. (C 4745).

It is imperative that we try to understand what work the concept of privacy is doing in today’s complex information environment. As it happens, some of the leading and emerging lights in privacy law scholarship are beginning to disentangle privacy from other information-based values, reminding the field just what we are experts in. 31 See, e.g., Julie E. Cohen, What Privacy Is For, 126 Harv. L. Rev. 1904, 1905 (2013) [hereinafter Cohen, What Privacy Is For] (arguing that privacy is not a legal protection for the liberal self but instead a fundamental tool for protecting “the situated practices of boundary management through which the capacity for self-determination develops”); Cynthia Dwork & Deirdre K. Mulligan, It’s Not Privacy, and It’s Not Fair, 66 Stan. L. Rev. Online 35, 36 (2013), https://review.law.stanford.edu/wp-content/uploads/sites/3/2016/08/DworkMullliganSLR.pdf [https://perma.cc/Q8JA-463Q] (“Regrettably, privacy controls and increased transparency fail to address concerns with the classifications and segmentation produced by big data analysis.”); Paul Schwartz, Data Processing and Government Administration: The Failure of the American Legal Response to the Computer, 43 Hastings L.J. 1321, 1343–52 (1992) [hereinafter Schwartz, Data Processing] (discussing the weaknesses of the “privacy” paradigm and proposing instead to talk about bureaucratic justice and human autonomy); Salomé Viljoen, A Relational Theory of Data Governance, 131 Yale L.J. 573, 578 (2021) (critiquing privacy law’s individualism, which fails to address data’s population-level relational effects); Tal Z. Zarsky, Privacy and Manipulation in the Digital Age, 20 Theoretical Inquiries L. 157, 161–68 (2019) [hereinafter Zarsky, Privacy and Manipulation] (suggesting that manipulation-based arguments are preferable to privacy theories, which are plagued with substantial theoretical shortcomings and pitfalls). The time has come to leverage this literature in service of a new direction for the field.

The Essay proceeds as follows. Part I traces the efforts of twentieth-century privacy scholars to define our subject matter, culminating in Solove’s intervention in the early 2000s, and acknowledges the generative role of privacy’s taxonomy paradigm. Part II argues that social recognition has always been a flawed means by which to distinguish privacy and that privacy as taxonomy stands in the way of identifying, reconciling, and distinguishing privacy harms in a diverse and complex information environment. Section II.A discusses information-based harms that privacy law was late to recognize, such as information-based discrimination and algorithmic manipulation. Section II.B discusses unresolved tensions between and among privacy and other values.

Part III outlines a post-taxonomy research agenda for privacy law, one that decouples classification from social recognition, foregrounds the role of reflexivity, and begins to answer the deep question of just what work privacy is doing in the context of information-based harms. Misinformation, hate speech, bias, data sovereignty, labor extraction, and many other contemporary concerns implicate or involve privacy but sound in different values altogether. Because the field has uncritically broadened the concept of privacy, most Americans are missing out on a global conversation around data protection, information governance, and harm mitigation. Only by distinguishing privacy can privacy law reach its full potential as a discipline and a body of law.