Introduction
When a Wisconsin circuit court sentenced Eric Loomis to six years of initial confinement and five years of extended supervision, it did so based on three bar charts, each measured on a scale from one to ten.
These charts were generated by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, a risk-assessment algorithm that provides “decisional support” to courts determining bail, parole, and sentencing outcomes.
COMPAS concluded that Mr. Loomis posed a “high risk to the community”; in light of that judgment, the circuit court ruled out probation.
Mr. Loomis suspected that COMPAS impermissibly considered his gender and incorrectly assessed his “risk” given that the program was not designed as a sentencing tool.
But trade secret law, the common law doctrine that protects the secrecy of commercial information, barred Mr. Loomis from viewing COMPAS’s source code and confirming his suspicions.
Mr. Loomis appealed his sentence on the grounds that the secrecy surrounding COMPAS violated his due process rights by undermining his right to raise an effective defense and challenge the validity of his accusers’ technology.
Despite the weighty liberty interests at stake, the Wisconsin Supreme Court determined that COMPAS was a protected trade secret and refused to grant Mr. Loomis access to the algorithm.
This weaponization of trade secret law to conceal algorithms in criminal proceedings denies defendants like Mr. Loomis their right to present a complete and effective defense against their accusers.
Courts increasingly rely on automated decisionmaking to inform their judgments even though these technologies come with significant risks.
Algorithms produce inaccurate or discriminatory outcomes when developers build them on datasets or features that incorporate bias.
In the criminal legal setting, the consequences are severe: Algorithmic errors generate overly punitive bail, sentencing, or incarceration outcomes that disproportionately harm racial and gender minorities.
Given the absence of uniform regulation over data collection and algorithmic training, individuals like Mr. Loomis often stand as the last line of defense to detect inaccuracies in the programs deployed against them. But when trade secret law allows developers to block defendants from reviewing the code’s accuracy and methodology, the risks of algorithmic error and discrimination abound.
Without access to source code, individuals like Mr. Loomis cannot challenge the scientific validity of sentencing algorithms or present an effective defense against their accusers.
The current state of trade secret law lets corporations conceal their algorithms to the detriment of people in the criminal legal system.
But the doctrine has not always been this way. While modern courts broadly seclude algorithmic information, early courts narrowly protected secret inventions to encourage greater innovation than would otherwise exist in an unregulated market.
In fact, trade secret law first articulated principles of restraint: Courts were to protect secret ideas and inventions just enough to incentivize innovation and creation but not so much as to award intellectual monopolies and stifle competition.
Given modern doctrine’s misalignment with these early policy objectives, courts and scholars alike must reassess the propriety of extending trade secret protection to algorithmic information. Part I reviews the origins of trade secret law to clarify the first principles that shaped the doctrine. Rather than conceal proprietary information, early trade secret law sought to promote a public domain of ideas on which market actors could fairly compete and innovate. Part II examines how trade secret protection of “ancillary information” contravenes those principles by (1) secluding non-trade-secret information about algorithmic development and performance and (2) restricting competition.
Part III proposes a novel framework that redefines the scope of trade secret protection in the algorithmic context and revives trade secret law’s early policy objectives. This Note concludes that while algorithms themselves constitute protectable trade secrets, ancillary information—such as training data, performance statistics, or descriptions of the software’s methodology—does not. The disclosure of ancillary information comports with first principles and public demands for algorithmic transparency while maintaining trade secret holders’ proprietary interests.