THE WEAPONIZATION OF TRADE SECRET LAW

In criminal proceedings, courts are increasingly relying on automated decisionmaking tools that purport to measure the likelihood that a defendant will reoffend. But these technologies come with considerable risk; when trained on datasets or features that incorporate bias, criminal legal algorithms threaten to replicate discriminatory outcomes and produce overly punitive bail, sentencing, and incarceration decisions. Because regulators have failed to establish systems that manage the quality of data collection and algorithmic training, defendants and public interest groups often stand as the last line of defense to detect algorithmic error. But developers routinely call upon trade secret law, the common law doctrine that protects the secrecy of commercial information, to bar impacted stakeholders from accessing potentially biased software.

This weaponization of trade secret law to conceal algorithms in criminal proceedings denies defendants their right to present a complete and effective defense. Furthermore, the practice contravenes the early policy objectives of trade secret law that sought to promote a public domain of ideas on which market actors could fairly compete and innovate. To remedy this misalignment, this Note proposes a novel framework that redefines the scope of trade secret protection and revives the first principles underlying the doctrine. It concludes that while algorithms themselves constitute protectable trade secrets, information ancillary to the algorithm—such as training data, performance statistics, or descriptions of the software’s methodology—does not. Access to ancillary information protects accused parties’ right to defend their liberty and promotes algorithmic fairness while aligning trade secret law with its first principles.

Introduction

When a Wisconsin circuit court sentenced Eric Loomis to six years of initial confinement and five years of extended supervision, it did so based on three bar charts, measured on a scale from one to ten. 1 See Petition for Writ of Certiorari at 3–4, Loomis v. Wisconsin, 137 S. Ct. 2290 (2017) (No. 16-6387) (noting that “the State and the trial court referenced the COMPAS assessment and used it as a basis for incarcerating Mr. Loomis” and “COMPAS is in the form of a bar chart . . . on a scale of one to ten”). These charts were generated by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, a risk-assessment algorithm that provides “decisional support” to courts determining bail, parole, and sentencing outcomes. 2 State v. Loomis, 881 N.W.2d 749, 754 (Wis. 2016); see also State v. Loomis, No. 2015AP157-CR, 2015 WL 5446731, at *1 n.2 (Wis. Ct. App. Sept. 17, 2015) (describing the court’s reliance on COMPAS to “make decisions about prison incarceration versus community supervision[] [and] to make decisions about bond”). COMPAS concluded that Mr. Loomis posed a “high risk to the community”; 3 For a discussion of the circuit court’s analysis of Loomis’s COMPAS score in sentencing, see Loomis, 881 N.W.2d at 755 (“You’re identified, through the COMPAS assessment, as an individual who is at high risk to the community.” (quoting Loomis, 2015 WL 5446731, at *1)). in light of that judgment, the circuit court denied Mr. Loomis probation. 4 See id. (“In terms of weighing the various factors, I’m ruling out probation because of the seriousness of the crime and because your history, your history on supervision, and the risk assessment tools that have been utilized, suggest that you’re extremely high risk to re-offend.” (internal quotation marks omitted) (quoting the circuit court’s opinion)). Mr. Loomis suspected that COMPAS impermissibly considered his gender 5 See Loomis, 2015 WL 5446731, at *3 (certifying to the Wisconsin Supreme Court the question of “whether a sentencing court’s reliance on a COMPAS assessment runs afoul of Harris’s prohibition on gender-based sentencing” (cleaned up)). and incorrectly assessed his “risk” given that the program was not designed as a sentencing tool. 6 Id. at *2 (“Loomis asserts that COMPAS assessments were developed for use in allocating corrections resources and targeting offenders’ programming needs, not for the purpose of determining sentence.”). But trade secret law, the common law doctrine that protects the secrecy of commercial information, 7 E.g., Amy Kapczynski, The Public History of Trade Secrets, 55 U.C. Davis L. Rev. 1367, 1380 (2022) (explaining how modern applications of trade secret law protect “all commercially valuable business secrets” from wrongful acquisition, use, or disclosure by third parties). barred Mr. Loomis from viewing COMPAS’s source code and confirming his suspicions. 8 The state did not dispute Loomis’s assertions that “the company that developed and owns COMPAS maintains as proprietary the underlying methodology that produces assessment scores” and that “the courts are relying on ‘a secret non-transparent process.’” Loomis, 2015 WL 5446731, at *2. Mr. Loomis appealed his sentence on the grounds that the secrecy surrounding COMPAS violated his due process rights by undermining his right to raise an effective defense and challenge the validity of his accusers’ technology. 9 Id. at *1 (certifying to the Wisconsin Supreme Court the question of “whether this practice violates a defendant’s right to due process, either because the proprietary nature of COMPAS prevents defendants from challenging the COMPAS assessment’s scientific validity, or because COMPAS assessments take gender into account”). Despite the heavy liberty interests at stake, the Wisconsin Supreme Court determined that COMPAS was a protected trade secret and refused to grant Mr. Loomis access to the algorithm. 10 State v. Loomis, 881 N.W.2d 749, 761 (Wis. 2016) (finding that COMPAS was “a proprietary instrument and a trade secret”). The U.S. Supreme Court denied Mr. Loomis’s petition for writ of certiorari. See Loomis v. Wisconsin, 137 S. Ct. 2290, 2290 (2017).

This weaponization of trade secret law to conceal algorithms in criminal proceedings denies defendants like Mr. Loomis their right to present a complete and effective defense against their accusers. 11 See, e.g., State v. Pickett, 246 A.3d 279, 299 (N.J. Super. Ct. App. Div. 2021) (“[A] criminal trial where the defendant does not have ‘access to the raw materials integral to the building of an effective defense’ is fundamentally unfair.” (quoting State ex rel. A.B., 99 A.3d 782, 790 (N.J. 2014))). Courts increasingly rely on automated decisionmaking to inform their judgments 12 Courts often consider algorithmic predictions about the likelihood that a defendant may one day reoffend. Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. 1343, 1347–48 (2018) [hereinafter Wexler, Life, Liberty, and Trade Secrets] (describing how “judges and parole boards rely on risk assessment instruments, which purport to predict an individual’s future behavior, to decide who will make bail or parole and even what sentence to impose”). even though these technologies come with significant risks. 13 Ziad Obermeyer, Brian Powers, Christine Vogeli & Sendhil Mullainathan, Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, 366 Science 447, 447 (2019) (“There is growing concern that algorithms may reproduce racial and gender disparities via the people building them or through the data used to train them.” (citations omitted)). Algorithms produce inaccurate 14 See, e.g., Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1256 (2008) (describing state-administered algorithms that “issued hundreds of thousands of incorrect Medicaid, food stamp, and welfare eligibility determinations and benefit calculations”). or discriminatory 15 See, e.g., Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 Wash. L. Rev. 579, 601 (2018) (describing how a criminal legal algorithm was twice as likely to misclassify Black defendants as posing a high risk for reoffending relative to white defendants). outcomes when developers build them on datasets or features that incorporate bias. 16 Biased datasets reproduce racial and gender disparities. See id. at 592 (describing how “training data infused with implicit bias can result in skewed datasets that fuel both false positives and false negatives”). Programmers train algorithms to perform a specified task (e.g., prediction or pattern recognition) by exposing the system to an input dataset and providing select examples of model decisionmaking. See M. I. Jordan & T. M. Mitchell, Machine Learning: Trends, Perspectives, and Prospects, 349 Science 255, 255 (2015) (describing how a programmer may develop a machine learning algorithm by “showing it examples of desired input-output behavior”); Levendowski, supra note 15, at 591 (explaining how developers train artificial intelligence systems by providing an “example” of decisionmaking and exposing the system to other “variations” from which it learns to make comparable decisions). From these examples, the algorithm learns to detect certain patterns or rules that guide future automated assessments. Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87, 91 (2014). In the criminal legal setting, the consequences are severe: Algorithmic errors generate overly punitive bail, sentencing, or incarceration outcomes that disproportionately harm racial and gender minorities. 17 Andrea Roth, Trial by Machine, 104 Geo. L.J. 1245, 1270 (2016) (describing the risk of “illegitimate or illegal discrimination” among algorithms that influence bail, testimony, verdicts, and sentencing in criminal trials (internal quotation marks omitted) (quoting Omer Tene & Jules Polonetsky, Judged by the Tin Man: Individual Rights in the Age of Big Data, 11 J. on Telecomm. & High Tech. L. 351, 358 (2013))). Given the absence of uniform regulation over data collection and algorithmic training, 18 See François Candelon, Rodolphe Charme di Carlo, Midas De Bondt & Theodoros Evgeniou, AI Regulation Is Coming, Harv. Bus. Rev., Sept.–Oct. 2021, at 102, 106 (“In dealing with biased outcomes, regulators have mostly fallen back on standard antidiscrimination legislation. That’s workable as long as there are people who can be held responsible for problematic decisions. But with AI increasingly in the mix, individual accountability is undermined.”); Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Cass R. Sunstein, Discrimination in the Age of Algorithms, J. Legal Analysis, 2018, at 1, 2 (suggesting that the lack of regulatory oversight over algorithms may hamper efforts to detect discrimination). individuals like Mr. Loomis often stand as the last line of defense to detect the inaccuracies of programs deployed against them. But when trade secret law allows developers to block defendants from reviewing their code’s accuracy and methodology, the risks of algorithmic error and discrimination abound. 19 Sonia K. Katyal, The Paradox of Source Code Secrecy, 104 Cornell L. Rev. 1183, 1248 (2019) [hereinafter Katyal, The Paradox of Source Code Secrecy] (“[A]ssertions of trade secret protection . . . remain a key obstacle for researchers and litigants seeking to test the efficacy and fairness of government algorithms and automated decision making.”). Without access to source code, individuals like Mr. Loomis cannot challenge the scientific validity of sentencing algorithms or present an effective defense against their accusers. 20 See State v. Pickett, 246 A.3d 279, 301 (N.J. Super. Ct. App. Div. 2021) (arguing that defendants have a “competing and powerful” interest in forensic software used to incriminate them and that “shrouding the source code and related documents in a curtain of secrecy substantially hinders defendant’s opportunity to meaningfully challenge reliability”).
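
The training process described above can be made concrete with a short, purely illustrative Python sketch. The data, the train_threshold and false_positive_rate functions, and the single-feature "model" below are all invented for this example; they do not describe COMPAS or any actual risk-assessment instrument. The sketch simply shows how a system that learns a decision rule from labeled examples inherits whatever skew those examples contain, and why training data and group-level performance statistics, rather than the source code alone, are what reveal that skew.

    # Minimal, hypothetical sketch of supervised training: the "model" learns a
    # prior-arrest cutoff from labeled examples and inherits any skew in them.
    # The records below are invented for illustration only.

    # Hypothetical training records: (recorded prior arrests, group, reoffended).
    # Group "A" is over-policed, so its members carry inflated arrest counts.
    training_data = [
        (6, "A", True), (5, "A", True), (4, "A", False), (3, "A", False),
        (4, "B", True), (3, "B", True), (2, "B", False), (1, "B", False),
    ]

    def train_threshold(records):
        """Pick the prior-arrest cutoff that best classifies the training examples."""
        best_cutoff, best_correct = 0, -1
        for cutoff in range(0, 8):
            correct = sum((priors >= cutoff) == reoffended
                          for priors, _, reoffended in records)
            if correct > best_correct:
                best_cutoff, best_correct = cutoff, correct
        return best_cutoff

    def false_positive_rate(records, cutoff, group):
        """Share of non-reoffenders in a group whom the model flags as high risk."""
        negatives = [priors for priors, g, reoffended in records
                     if g == group and not reoffended]
        return sum(priors >= cutoff for priors in negatives) / len(negatives)

    cutoff = train_threshold(training_data)
    print(f"learned cutoff: {cutoff}+ prior arrests")            # 3, given these examples
    for group in ("A", "B"):
        rate = false_positive_rate(training_data, cutoff, group)
        print(f"group {group} false-positive rate: {rate:.0%}")   # A: 100%, B: 0%

In this toy setting, the disparity is visible only in the training records and the group-level error rates; inspecting the learned rule in isolation would not reveal it.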

The current state of trade secret law lets corporations conceal their algorithms to the detriment of people in the criminal legal system. 21 Rebecca Wexler, It’s Time to End the Trade Secret Evidentiary Privilege Among Forensic Algorithm Vendors, Brookings Inst. (July 13, 2021), https://www.brookings.edu/blog/techtank/2021/07/13/its-time-to-end-the-trade-secret-evidentiary-privilege-among-forensic-algorithm-vendors/ [https://perma.cc/M967-3T7R] (“Developers who sell or license forensic algorithms to law enforcement routinely claim that they have a special trade secret entitlement to entirely withhold relevant evidence about how these systems work from criminal defense expert witnesses.”). But the doctrine has not always been this way. While modern courts broadly seclude algorithmic information, 22 See, e.g., Q-Co Indus. v. Hoffman, 625 F. Supp. 608, 617 (S.D.N.Y. 1985) (“Computer software, or programs, are clearly protectible under the rubric of trade secrets . . . .”). early courts narrowly protected secret inventions to encourage greater innovation than would otherwise exist in an unregulated market. 23 See Adam D. Moore, A Lockean Theory of Intellectual Property, 21 Hamline L. Rev. 65, 65 (1997) (“In order to enlarge the public domain permanently, society protects certain private domains temporarily.”). In fact, trade secret law first articulated principles of restraint: Courts were to protect secret ideas and inventions just enough to incentivize innovation and creation but not so much as to award intellectual monopolies and stifle competition. 24 See infra section I.A. for a discussion of trade secret law’s limited scope.

Given this misalignment with early policy objectives, courts and scholars alike must reassess the propriety of extending trade secret protection to algorithmic information. Part I reviews the origins of trade secret law to clarify the first principles that shaped the doctrine. Rather than conceal proprietary information, early trade secret law sought to promote a public domain of ideas on which market actors could fairly compete and innovate. Part II examines how trade secret protection of “ancillary information” 25 This Note adopts the term “ancillary information” to describe nonprotected materials related to protected algorithms. For a more detailed explanation of ancillary information, see infra notes 132–136 and accompanying text. contravenes those principles by (1) secluding non-trade-secret information about algorithmic development and performance and (2) restricting competition. 26 See infra section II.D (discussing how secluding information on algorithmic methodology and performance limits efforts to improve existing technologies). Part III proposes a novel framework that redefines the scope of trade secret protection in the algorithmic context and revives trade secret law’s early policy objectives. This Note concludes that while algorithms themselves constitute protectable trade secrets, ancillary information—such as training data, performance statistics, or descriptions of the software’s methodology—does not. The disclosure of ancillary information comports with first principles and public demands for algorithmic transparency while maintaining trade secret holders’ proprietary interests.