From facial recognition software to the controversial robotic “police dogs,” artificial intelligence is becoming an increasingly prominent part of the legal system. AI even allocates police resources, determining how many officers a given neighborhood needs based on crime statistics. But can algorithms determine the likelihood that someone will commit a crime, and if they can, is it ethical to use this technology to sentence individuals to prison?
Algorithms that attempt to predict recidivism (the likelihood that a criminal will commit future offenses) sift through data to produce a recidivism score, which ostensibly indicates the risk a person poses to their community. As Karen Hao explains for the MIT Technology Review,
The logic for using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether for rehabilitation or for prison sentences. In theory, it also reduces any bias influencing the process, because judges are making decisions on the basis of data-driven recommendations and not their gut.
Human error and racial bias contribute to over-incarceration, so researchers are hoping that color-blind computers can make better choices for us.
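No one outside the company knows what a tool like COMPAS actually computes, but systems of this kind are, at bottom, statistical classifiers that convert a defendant’s attributes into a score. The sketch below is purely hypothetical: it assumes just two inputs that such systems are widely believed to consider, age and prior offense count, and every weight in it is invented for illustration rather than drawn from any real product.

```python
import math

def recidivism_score(age: int, prior_offenses: int) -> int:
    """Purely hypothetical risk score on a 1-10 scale.

    The weights below are invented for illustration; real tools such as
    COMPAS keep their features and weights secret.
    """
    # A simple logistic model: younger defendants and longer records push
    # the predicted probability of reoffending upward.
    z = -1.5 - 0.04 * (age - 25) + 0.5 * prior_offenses
    probability = 1 / (1 + math.exp(-z))             # map to (0, 1)
    return max(1, min(10, round(probability * 10)))  # bucket into deciles 1-10

# Two hypothetical defendants facing identical charges
print(recidivism_score(age=22, prior_offenses=4))  # higher decile: "high risk"
print(recidivism_score(age=45, prior_offenses=0))  # lower decile: "low risk"
```

The arithmetic is trivial; the trouble is the opacity. Change the invented weights and the same defendant lands in a different risk bucket, and when the model is proprietary, neither the defendant nor the judge can inspect that choice.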
But in her book When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence, former judge Katherine B. Forrest explains that Black offenders are far more likely than their white counterparts to be labeled high-risk, a finding that speaks to the well-documented racial bias of these systems. As Hao reminds us,
populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.
Because this technology is so new and lucrative, companies are extremely protective of their algorithms. The COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), created by Northpointe Inc., is the most widely used recidivism predictor in the legal system, yet no one knows what data set it draws from or how its algorithm generates a final score. We can assume the system looks at factors like age and previous offenses, but beyond that, the entire process is shrouded in mystery. Studies also suggest that recidivism algorithms are alarmingly inaccurate; Forrest notes that systems like COMPAS are incorrect around 30 to 40 percent of the time. In other words, roughly three or four of every ten risk predictions the system makes turn out to be wrong. Even with such a high chance of error, recidivism scores are difficult to challenge in court. In a lucid editorial for the American Bar Association, Judge Noel L. Hillman explains that,
A predictive recidivism score may emerge oracle-like from an often-proprietary black box. Many, if not most, defendants, particularly those represented by public defenders and counsel appointed under the Criminal Justice Act because of indigency, will lack the resources, time, and technical knowledge to understand, probe, and challenge the AI process.
Judges may assume a score generated by AI is infallible and adjust their rulings accordingly.
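To get a feel for what the error rates Forrest cites mean at the scale courts operate, here is a rough back-of-the-envelope calculation. The docket size, the flat 35 percent error rate, and the even split between the two directions a risk label can fail are all assumptions made purely for illustration.

```python
# Back-of-the-envelope illustration of the 30-40 percent error rate cited above.
# Every number here is a hypothetical assumption, not data from COMPAS itself.
defendants_scored = 10_000   # imagined number of scored defendants in one jurisdiction
error_rate = 0.35            # midpoint of the 30-40 percent range Forrest cites

wrong_labels = round(defendants_scored * error_rate)

# Purely for illustration, assume the mistakes split evenly between the two
# ways a risk label can be wrong.
false_high = wrong_labels // 2           # labeled high-risk, never reoffend
false_low = wrong_labels - false_high    # labeled low-risk, later reoffend

print(f"{wrong_labels:,} of {defendants_scored:,} risk labels are wrong")
print(f"~{false_high:,} people risk longer sentences on a faulty 'high-risk' label")
```

Even on these made-up numbers, the problem Hillman describes is stark: thousands of people could carry a faulty risk label, and few of them would have the resources to contest it.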
In his article, Hillman references Loomis v. Wisconsin, a landmark case for recidivism algorithms. In 2013, Eric Loomis was arrested for driving a car that had been involved in a drive-by shooting. During sentencing, the judge cited Loomis’s high COMPAS score in handing down a six-year prison term. Loomis challenged the validity of the score, but in 2016 the Wisconsin Supreme Court upheld Northpointe’s right to protect its trade secrets rather than reveal how the number had been reached. Though COMPAS scores aren’t currently admissible in court as evidence against a defendant, the judge in the Loomis case did take the score into account during sentencing, which sets a dangerous precedent.
Even if we could predict a person’s future behavior with complete accuracy, replacing a judge with a computer would make an already dehumanizing process dystopian. Hillman argues that,
When done correctly, the sentencing process is more art than science. Sentencing requires the application of soft skills and intuitive insights that are not easily defined or even described. Sentencing judges are informed by experience and the adversarial process. Judges also are commanded to adjust sentences to avoid unwarranted sentencing disparity on a micro or case-specific basis that may differ from national trends.
In other words, attention to nuance is completely lost when defendants are reduced to data points. The solution to racial bias isn’t to bring in artificial intelligence, but to strengthen our own empathy and sense of shared humanity, which will always produce more equitable rulings than AI can.