Abstract
In one sense, America’s newest abolitionist movement—advocating the elimination of policing and prison—has been a success. Following the 2020 Black Lives Matter protests, a small group of self-described radicals convinced a wide swath of ordinary liberals to accept a sweeping claim: Mere reforms cannot meaningfully reduce prison and policing’s serious harms. Only elimination can. In another sense, abolitionists have failed to secure lasting policy change. The difficulty is crime. In 2021, following a nationwide uptick in homicides, liberal support for abolitionist proposals collapsed. Despite being newly “abolition curious,” left-leaning voters consistently rejected concrete abolitionist policies. Faced with the difficult choice between reducing prison and policing on the one hand and controlling serious crime on the other, voters consistently chose the latter.
This Article presents and analyzes a policy approach designed to accomplish both goals simultaneously: “Algorithmic Abolitionism.” Under Algorithmic Abolitionism, powerful machine learning algorithms would allocate policing and incarceration. They would maximally abolish both, up to the point at which crime would otherwise begin to rise. Results could be impressive. The best evidence evaluating modern machine learning models suggests that Algorithmic Abolitionist policies could: eliminate at least 40% of Terry stops, with high-end estimates above 80%; free a similar share of incarcerated persons; eradicate most traffic stops; and potentially remove police patrols from at least half of city blocks—all without increasing crime.
Beyond these practical effects, Algorithmic Abolitionist thinking generates new and important normative insights in the debate over algorithmic discrimination. In short, in an Algorithmic Abolitionist world, traditional frameworks for understanding and measuring such discrimination fall apart. Traditional frameworks sometimes rate Algorithmic Abolitionist policies as unfair, even when those policies massively reduce the number of people mistreated because of their race. And they rate other policies as fair, even when those policies would cause far more discriminatory harm. To overcome these problems, this Article introduces a new framework for understanding—and a new quantitative tool for measuring—algorithmic discrimination: “bias-impact.” It then explores the complex array of normative trade-offs that bias-impact analyses reveal. As the Article shows, bias-impact analysis will be vital not just in the criminal enforcement context, but in the wide range of settings—healthcare, finance, employment—where Algorithmic Abolitionist designs are possible.
Recommended Citation
Peter N. Salib, Abolition by Algorithm, 123 Mich. L. Rev. 800 (2025). Available at: https://repository.law.umich.edu/mlr/vol123/iss5/2