Introduction
In 2015, LinkedIn settled for $13 million with users who, after signing up for LinkedIn’s “Add Connections” feature, were dismayed to learn that LinkedIn had sent unwanted emails to their address book contacts on their behalf.
These LinkedIn users had agreed to send an initial email inviting their professional contacts to connect, but what they didn’t know was that LinkedIn would send up to two reminder emails to each contact.
Contacts on the receiving end had virtually no way to opt out of these reminders.
This is one of the more notorious cases of companies using “dishonest design”—also known as “dark patterns”—to trick or push consumers into “doing things they don’t really want to do.”
But even though most dark patterns don’t make headlines or result in multimillion-dollar settlements, they significantly impact consumers’ online experiences because they are everywhere.
Many of the basic tactics and strategies underlying dark patterns are neither new nor unique to the online context. Before the advent of the internet, salespeople and marketing professionals had long wielded persuasion, coercion, and even manipulation with great effect.
What makes these practices particularly concerning in the digital context, however, is their scale: Online platforms can reach millions of consumers within seconds through targeted advertisements, and companies can use automated tools to spam consumers with marketing emails.
Companies’ incentives are not always aligned with consumers’ best interests or preferences, and design is a potent tool for companies to shape consumers’ digital experiences and influence their behavior.
For example, companies that mediate consumers’ online social interactions have “overwhelming incentives to design technologies in a way that maximizes the collection, use, and disclosure of personal information.”
Some scholars have argued that design should play a bigger role in privacy law, which has tended to focus more on data collection, use, and distribution.
After all, design conveys signals to consumers, affects the transaction costs of their online activities, and shapes their perceptions.
As Professor Woodrow Hartzog remarks, “Design is everything . . . . [D]esign is power.”
Scholarship on dark patterns, which range from the merely troubling to the clearly manipulative, has focused on developing a taxonomy and definitions for the different types of dark patterns, conducting empirical research to better understand their effectiveness, and broadly surveying the legal and regulatory landscape for theories, existing and new, through which to curb these practices.
Scholars and researchers have already identified “nagging” as one of many categories of dark patterns: online design practices that create persistent interactions with users and may eventually compel them to do things they would not otherwise have done. This Note contributes to existing legal scholarship by offering a deep dive into the nagging category of dark patterns, particularly the unique legal issues that the practice raises.
This Note argues for the regulation of the nagging category of dark patterns and proposes a “do not nag” feature, modeled after the federal “do not call” registry, as a solution. While the FTC has started to use its section 5 “unfair or deceptive” authority to combat some types of dark patterns, particularly practices that mislead consumers, nagging practices are especially elusive, yet just as insidious as the more commonly discussed dark patterns. Part I of this Note defines the nagging category of dark patterns and argues that nagging practices harm consumers and warrant timely intervention. In particular, section I.B identifies both the direct and indirect harms that nagging poses to consumers. Part II provides an overview of recent legislative and regulatory responses to dark patterns more generally and explains why existing consumer protection legal frameworks, though likely capable of addressing most other categories of dark patterns, will be ineffective at addressing nagging. Section III.A proposes a “do not nag” feature as a solution to the unique nagging problem, drawing on lessons learned from the “do not call” registry and the (ultimately unsuccessful) “do not track” movement. Section III.B explains why a “do not nag” feature would survive First Amendment scrutiny and engages with other critiques this solution may face, including that it places too heavy a burden on consumers and could have unintended consequences.