What’s Really Making Us Uncomfortable—the Use of AI in Evaluating the Likelihood of Recidivism, or the Policy of Sentencing Based on the Likelihood of Recidivism?

By Kimberly G. Koziara

In his recent New York Review article, “Sentenced by Algorithm,” a review of former SDNY judge Katherine Forrest’s book, “When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence,” current SDNY Judge Jed Rakoff evaluates the many shortcomings of existing AI products intended to predict the likelihood that convicted offenders will reoffend. These products are designed to guide judges in determining whether a defendant’s sentence should be extended on a theory of “incapacitation”—essentially, to protect the general public from the possibility that the defendant will continue his pattern of criminality in the future. As Rakoff succinctly explains, the products currently available have unacceptably high error rates, mostly erring toward over-predicting future criminality. Moreover, their “black box” design raises concerns about the assumptions underlying each algorithm and about a defendant’s ability to effectively challenge its output.

The book review is informative and certainly an interesting primer on the use of AI products in criminal sentencing. But Judge Rakoff raises perhaps his most salient point at the very end of his piece: “More broadly, the fundamental question remains: Even if these algorithms could be made much more accurate and less biased than they currently are, should they be used in the criminal justice system in determining whom to lock up and for how long? My own view is that increasing a defendant’s sentence of imprisonment on the basis of hypothesized future crimes is fundamentally unfair.”

The idea of incarcerating someone for a crime he did not, and may never, commit is inherently discomfiting. And when the decision is divorced from human judgment and empathy, it somehow feels even less just, perhaps because of our inherent distrust of what we cannot understand.

The AI products designed to predict recidivism may not be well developed yet, but if the current trajectory of AI generally is any indicator, these products could very soon become more sophisticated and more accurate than they are today, and almost certainly more accurate, on the whole, than any individual human judgment. That is when we will have to ask the real question—the hard question—that Judge Rakoff raises.