Algorithmic Trust within the Criminal Justice Context

Dr. Stacy Wood

Assistant Professor

School of Computing and Information

University of Pittsburgh

Abstract

Proprietary algorithms are increasingly deployed across the broad spectrum of the criminal justice system within the United States. Despite concerns over the black-boxing of proprietary technologies, algorithms are considered trustworthy tools for removing bias, improving upon human decision-making and managing scarce resources. The investment in algorithms as trustworthy tools is steeped within the socio-cultural context of the criminal justice system and relies upon pre-existing tools, technologies and categories already operationalized. More research, discussion and nuance are needed to address the vacuum of policy, standards and best practices with respect to these algorithms, particularly given the speed of their implementation.

Executive Summary

Prediction and forecasting and their attendant technologies have pervaded the work of criminal justice for decades at varying degrees of sophistication and integration.[1] Over the past few years, proprietary algorithms designed and maintained by private businesses have spread throughout the criminal justice system, informing decisions about bail, sentencing and parole.[2] These systems work to determine a defendant’s risk of recidivism or of failure to appear in court, and the rhetoric surrounding their implementation focuses on three proposed positive outcomes: saving cost and time, improving upon the accuracy of human decision-making, and ameliorating systemic bias.[3] Government agencies and public entities are not in the business of writing their own algorithms; they typically buy or license them from private businesses, limiting the degree to which anyone from the defense attorney to the arresting police officer can access the details of the decision-making process. The black-boxing of algorithms as a result of claims regarding their status as intellectual property poses a unique challenge to regulators, researchers and anyone in contact with the criminal justice system. Currently, there are no federal laws or broadly accepted standards requiring transparency, evaluation of these tools or oversight of their use.

To date, one case serves as precedent: State v. Loomis. After being convicted for his role in a drive-by shooting, Eric Loomis answered a list of questions whose responses were entered into the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk-assessment tool developed by Northpointe and used by the Wisconsin Department of Corrections. COMPAS assessed Loomis to be “high risk,” and that score was one determining factor in his eventual sentencing by the trial judge.[4] In challenging the sentence, Loomis claimed that the trial court’s reliance on the COMPAS risk assessment violated his right to due process for two reasons: because the proprietary nature of the product prevented him from challenging the scientific validity, accuracy and viability of the risk assessment, and because COMPAS specifically takes gender and race into account when calculating risk. In June 2017, the United States Supreme Court declined to hear Loomis’s appeal, leaving in place the Wisconsin Supreme Court’s holding that knowing the result of the risk assessment provided enough transparency for the defendant and the court.[5]

Developing policies and standards for the ways in which algorithms are implemented in the public sphere generally, and in criminal justice specifically, has been sorely neglected. More research is needed to determine how terms like trustworthy, fair, transparent, accountable and accurate are being deployed within the criminal justice context with respect to proprietary algorithms, and to foster an algorithmically literate justice system. The Wisconsin Supreme Court specifically pointed to the use of algorithmic risk assessment results as part of a suite of determining factors, rather than as an automated or singular tool,[6] but much more nuance is needed, since no party has access to the specific decision-making processes and data points used by COMPAS and other proprietary tools. The speed with which generations of sentencing guidelines, standards and arguments are, at a minimum, augmented and, at an extreme, replaced by algorithms presents an urgent challenge.

Although a 2016 ProPublica investigation[7] that tested the COMPAS system used in Florida found that, in general, the scores were unreliable in predicting violent crime and re-offense, and that the system flagged black defendants at almost twice the rate of comparable white defendants, use of these systems does not seem to be waning. Almost every state uses some form of algorithmic risk assessment at some level of its criminal justice system, while very few have taken the step of conducting a validity study. The criminal justice system considers these tools trustworthy enough to implement broadly, often without subsequent testing or oversight.

How then does COMPAS become understood as trustworthy within these often disparate contexts? I argue that the trust invested in these systems is part of a much longer history of predictive analytics in criminal justice that has laid the groundwork for these decision-making processes. While the details of how exactly COMPAS works remain obscure, Northpointe’s claims and corporate language are legible within the history of forecasting and predictive analytics in criminal justice, appealing to basic categorical understandings, pre-existing metrics and theoretical frameworks. Northpointe Inc. stated in 2012 that COMPAS incorporated “key scales from several of the most informative theoretical explanations of crime and delinquency including General Theory of Crime, Criminal Opportunity/Lifestyle Theories, Social Learning Theory, Subculture Theory, Social Control Theory, Criminal Opportunities/Routine Activities theory and Strain Theory.”[8] To some extent, COMPAS bolsters its credibility by associating itself with established theories, and in doing so it enters a pre-existing set of complex socio-technical systems with their own instantiations of systemic bias. The new and urgent challenge that algorithms pose is the rhetorical weight of claims that bias cannot exist within automated systems. The very qualities that make COMPAS trustworthy within the criminal justice system should give pause at assertions of technological neutrality. How, then, is trustworthiness being deployed within the criminal justice context? Who trusts what, when and how are key questions that require analysis considering these algorithms at work within their socio-technical-historical context.
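Very few jurisdictions conduct validity studies, yet the core of such an audit is not technically exotic. As a minimal sketch, assuming a hypothetical dataset with columns named race, decile_score and two_year_recid (illustrative names, not Northpointe’s actual schema) and an arbitrary “high risk” cut-off, the Python below computes the false positive rate per group, that is, the share of people who did not re-offend but were nonetheless flagged high risk, the disparity at the center of the ProPublica analysis.

```python
import pandas as pd

# Illustrative sketch only: column names, the cut-off of 5 and the records
# below are assumptions for demonstration, not Northpointe's or ProPublica's data.

def false_positive_rates(df, score_col="decile_score",
                         outcome_col="two_year_recid",
                         group_col="race", high_risk_threshold=5):
    """For each group, the share of non-recidivists labeled high risk."""
    rates = {}
    for group, sub in df.groupby(group_col):
        non_recidivists = sub[sub[outcome_col] == 0]
        if len(non_recidivists) == 0:
            continue  # no non-recidivists in this group; rate undefined
        rates[group] = (non_recidivists[score_col] >= high_risk_threshold).mean()
    return rates

# Fabricated records, purely to show the calculation:
records = pd.DataFrame({
    "race": ["black", "black", "black", "white", "white", "white"],
    "decile_score": [7, 6, 3, 8, 2, 1],
    "two_year_recid": [0, 1, 0, 1, 0, 0],
})
print(false_positive_rates(records))  # e.g. {'black': 0.5, 'white': 0.0}
```

A fuller validity study would also examine false negative rates, calibration and overall predictive accuracy, but even this simple comparison illustrates the kind of scrutiny that proprietary scores rarely receive before deployment.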

[1]Perry, W. L. (2013). Predictive policing: The role of crime forecasting in law enforcement operations. Rand Corporation.

[2] Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671.

[3]Hamilton, M. (2015). Risk-needs assessment: Constitutional and ethical challenges.

[4]State v. Loomis, No. 16-6387

[5]State v. Loomis, No. 16-6387

[6]State v. Loomis, No. 16-6387

[7] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica.

[8] Northpointe Inc. COMPAS risk & need assessment system: Selected questions posed by inquiring agencies. (accessed 10/2/17)