Introduction
What appeals rights, if any, should people have when they are subjected to decision-making by artificial intelligence (AI)?
The right to challenge decisions with significant effects is a core principle of the rule of law.
Yet it is unclear how this principle will fare for significant decisions made or facilitated by AI.
As data collection and storage have become cheaper, processing has become faster, and algorithms have become more complex and more effective at certain tasks, the use of AI in decision-making has increased. The government and private sector now use algorithms to decide how to distribute welfare benefits, whether to hire or fire a person, whether expressive material should be removed from online platforms, whether to keep people in prison, and more.
The increasing use of AI to aid or substitute for human decision-making raises the question of what, if any, process should be afforded those affected by these decisions.
Machine decision-making can be technically inscrutable and thus difficult to contest; it is likely to become even less scrutable as black-box machine-learning techniques expand.
Humans may exhibit an “automation bias” that creates overconfidence in machine decisions, and an ensuing bias against challenges to those decisions.
It is unclear how challenges, especially if they come with meaningful process rights, will affect the cost efficiencies that automated decision-making promises to deliver. And if related due process protections such as transparency and notice are implemented badly or not at all, meaningful challenges will not be possible.
In the United States, regulatory proposals directed at algorithmic decision-making have largely ignored calls for individual due process in favor of system-wide regulation aimed at risk mitigation. To the extent there has been convergence among recent U.S. policy proposals, it has been on the need for systemic policy solutions, such as algorithmic impact assessments or auditing, rather than an individual right to contest.
In Europe, by contrast, regulators are taking a more holistic approach to algorithmic decision-making. The European Union’s (EU) General Data Protection Regulation (GDPR), which went into effect in May 2018, establishes a complex set of rules governing algorithmic decision-making that spans multiple contexts and sectors.
The GDPR incorporates both systemic governance measures and various individual rights for data subjects: transparency, notice, access, a right to object to processing, and, for those subject to automated decision-making, the right to contest certain decisions.
Likewise, the Council of Europe has articulated a right to contest in its amended data protection convention, known as Convention 108.
The Council of Europe is an international human rights organization that consists of all the EU Member States plus additional non-EU members.
As of now, forty countries have signed the amended Convention, and twelve have ratified it.
The amended Convention states that “[e]very individual shall have a right[] . . . not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration.”
In 2020, the Council of Europe adopted recommendations on AI, explaining that individuals should be provided “effective means to contest relevant determinations and decisions.”
The right to contest AI is gaining traction outside of Europe, too. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental economic organization focused on stimulating world trade, includes a right to contest in its recommendations on AI.
The OECD’s recommendations have historically formed the basis of data protection laws around the world, and its recommendations on AI are likely to be similarly influential. Brazil’s comprehensive data protection law, enacted in 2018, includes “the right to request a review of decisions taken” by AI.
In November 2020, the Office of the Privacy Commissioner of Canada recommended that Canadian data privacy law be revised to include a right to contest AI decisions.
The proposed amendments to Quebec’s privacy law, Bill 64, include a limited right to contest.
Despite these developments, the right to contest AI has received little attention. Although the GDPR’s notice and transparency requirements for AI, especially the so-called “right to explanation,” have attracted a flurry of scholarly analysis, contestation has not garnered comparable scrutiny.
And although the right to contest is clearly established in the GDPR, regulators have yet to give meaningful guidance on what the right is or how it should be implemented.
This Article takes on the right to contest, both descriptively and normatively. It seeks to fill the gap in commentary and bridge the U.S. and EU conversations. This Article is the first to examine at length this right and its content, and the first to provide an in-depth analysis of the GDPR’s right to contest for a U.S. audience.
This Article investigates a central question about regulating algorithmic decision-making: Should there be a right to contest AI decisions? In doing so, it uncovers and fills a substantial gap in the literature: the lack of a theoretical scaffolding for discussing contestation models for privatized process at speed and at scale. It probes the question theoretically, considering reasons frequently given for establishing individual due process rights, and comparatively, through in-depth case studies of existing contestation systems. Ultimately, we find merit in the possibility of establishing a right to contest AI, including where decisions are made by private actors, to further due process values. We consider how to design an effective right.
Part I introduces some of the challenges algorithmic decision-making presents and how they might relate to contestation. Part II turns to whether a right to contest AI decision-making can find theoretical purchase. Part III looks to models for contestation, establishing four contestation archetypes and examining them in action through comparative case studies. It considers the GDPR’s right to contest and Member State implementations of it, the Digital Millennium Copyright Act’s “notice-and-takedown” scheme for online copyright infringement, the Fair Credit Billing Act’s contestation scheme for credit card charges, and the EU’s so-called “right to be forgotten.” Part IV integrates the findings from these investigations and develops normative and practical guidance for designing a right to contest AI.