Algorithms, Correcting Biases

Cass R. Sunstein

ALGORITHMS, BAIL, AND JAIL

Are algorithms biased? If so, in what respect? These are large questions, and there are no simple answers. My goal here is to offer one perspective on them, principally by reference to some of the most important current research on the use of algorithms for purposes of public policy and law. I offer two claims. The first, and the simpler, is that algorithms can overcome the harmful effects of cognitive biases, which can have a strong hold on people whose job it is to avoid them and whose training and experience might be expected to allow them to do so. Many social questions present prediction problems, where cognitive biases can lead people astray; algorithms can serve as a corrective (see Kleinberg et al. 2015, 105).

In a way, this should be an unsurprising claim. Some of the oldest and most influential work in behavioral science shows that statistical prediction often outperforms clinical prediction; one reason involves cognitive biases on the part of clinicians (Meehl [1954] 2013). Algorithms can be seen as a modern form of statistical prediction, and if they avoid biases, no one should be amazed. What I hope to add here is a concrete demonstration of this point in an important context, with some general remarks designed to address the concern that algorithms are biased.

The second claim, and the more complex one, is that algorithms can be designed so as to avoid racial (or other) discrimination in its unlawful forms, and that they also raise hard questions about how to balance competing social values (Kleinberg et al. 2019). When people complain about algorithmic bias, they are often concerned about discrimination on the basis of race and sex. The word "discrimination" can be understood in many different ways. It should be simple to ensure that algorithms do not discriminate in the way that American law most squarely addresses. It is less simple to deal with forms of inequality that concern many people, including an absence of "racial balance." As we shall see, algorithms allow new transparency about some difficult tradeoffs (Kleinberg et al. 2018; Kleinberg et al. 2019).
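
To make one such tradeoff concrete, here is a toy computation (my own illustration, with invented numbers, not an example from the cited papers). If a risk score is perfectly calibrated within each of two groups, but the groups differ in their distributions of underlying risk, then applying a single detention threshold to everyone will detain the two groups at different rates; calibration and equal detention rates cannot generally both hold.

```python
# Illustrative only: invented numbers, not data from the cited studies.
# Shows that a calibrated risk score plus one shared threshold can yield
# unequal detention rates when groups differ in underlying risk.
import numpy as np

rng = np.random.default_rng(1)

def detention_rate(base_rate, n=100_000, threshold=0.5):
    # Each person's score equals their true probability of the outcome,
    # scattered around the group's base rate, so the score is perfectly
    # calibrated within the group by construction.
    p = np.clip(rng.normal(base_rate, 0.15, n), 0.0, 1.0)
    return (p > threshold).mean()

for name, base in [("group A", 0.30), ("group B", 0.45)]:
    print(f"{name}: {detention_rate(base):.1%} detained at one shared threshold")
```

Nothing in this toy example turns on the particular numbers; any two groups with different risk distributions will be detained at different rates under one threshold, which is exactly the kind of tradeoff the algorithm makes visible.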

The principal research on which I focus comes from Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan, who explore judges' decisions about whether to release criminal defendants pending trial (Kleinberg et al. 2018, 237). Their goal is to compare the performance of an algorithm with that of actual human judges, with particular emphasis on the solution to prediction problems. It should be obvious that the decision about whether to release defendants has large consequences. If defendants are incarcerated, the long-term consequences could be very severe. Their lives could be ruined, or nearly so. But if defendants are released, they might flee the jurisdiction or commit crimes. People might be assaulted, raped, or killed.

In some states, the decision whether to allow pretrial release turns on a single factor: flight risk. To make their decision, judges have to solve a prediction problem: What is the likelihood that a defendant will flee the jurisdiction? In other states, the likelihood of crime also matters, and it too presents a prediction problem: What is the likelihood that a defendant will commit a crime? (As it turns out, flight risk and crime are closely correlated, so that if one accurately predicts the first, one would accurately predict the second as well.) Kleinberg and his colleagues built an algorithm that uses, as inputs, the same data available to judges at the time of a bail hearing, such as prior criminal history and current offense. Their central finding is that along every dimension that matters, the algorithm does much better than real-world judges (a schematic sketch of this kind of prediction setup follows the list below). Among other things:

  1. Use of the algorithm could maintain the same detention rate now produced by human judges and reduce crime by up to 24.7 percent. Alternatively, use of the algorithm could maintain the current level of crime reduction and reduce jail rates by as much as 41.9 percent. That means that if the algorithm were used instead of judges, thousands of crimes could be prevented without jailing even one additional...
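
To fix ideas, here is a minimal sketch of the kind of prediction setup described above. Everything in it is an assumption for illustration: the feature names, the synthetic data, the gradient-boosted model, and the 25 percent detention rate are mine, not the specification actually used by Kleinberg and his colleagues, who worked with real case records and a richer feature set. The sketch mirrors the logic of the first finding: hold the detention rate fixed, rank defendants by predicted risk, and observe the outcome among those released.

```python
# A hypothetical sketch of a pretrial-release prediction problem.
# All features, data, and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical inputs of the sort available at a bail hearing.
X = np.column_stack([
    rng.integers(0, 10, n),   # prior arrests
    rng.integers(0, 3, n),    # prior failures to appear
    rng.integers(0, 2, n),    # current offense is a felony
    rng.integers(18, 70, n),  # age
])

# Synthetic outcome: 1 = failed to appear (the quantity to be predicted).
logit = -2.0 + 0.15 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2] - 0.02 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Fix the detention rate, detain the highest-risk defendants, and observe
# the outcome among those released.
detention_rate = 0.25
threshold = np.quantile(risk, 1 - detention_rate)
released = risk <= threshold
print(f"Detained {1 - released.mean():.0%}; "
      f"failure-to-appear rate among released: {y_test[released].mean():.1%}")
```

The design choice that drives the reported gains is the ranking: at any fixed detention rate, detaining the highest-predicted-risk defendants first is what allows an algorithm to lower crime, or, equivalently, to match current crime levels while detaining fewer people.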
