Algorithmic pretrial risk tools encode racial bias from historical data

criminal-justice
Jurisdictions across the country have adopted algorithmic risk assessment tools to help judges decide whether to release defendants pretrial. Tools like the Public Safety Assessment (PSA), developed by Arnold Ventures, and COMPAS, developed by Northpointe (now Equivant), score defendants on their likelihood of failing to appear in court or committing a new crime. The scores are derived from historical criminal justice data, including prior arrests, convictions, and failures to appear.

The fundamental problem is that historical criminal justice data is contaminated by decades of racially biased policing and prosecution. Black Americans are arrested at 2.6 times the rate of white Americans according to the Bureau of Justice Statistics, not because they commit crime at that ratio, but because of disparate policing of Black neighborhoods. When an algorithm trains on this data, it learns that being Black, living in a predominantly Black neighborhood, or having prior arrests that were themselves products of biased policing predicts higher risk. ProPublica's 2016 analysis of COMPAS found that the tool was nearly twice as likely to falsely flag Black defendants as future criminals compared to white defendants.

This persists because the tools are marketed as objective and scientific, which gives judges political cover: a judge who releases someone based on an algorithm can point to the score if the person reoffends. Tool developers argue their products are race-neutral because they do not use race as an explicit input variable, ignoring the well-documented reality that proxy variables like zip code and prior arrest history are deeply correlated with race.

The root cause is that no federal standard governs the validation, auditing, or transparency of pretrial risk assessment tools. Each jurisdiction adopts whichever tool it chooses, often without independent validation on its local population, and the tools are frequently proprietary, making independent auditing impossible. The result is that an opaque algorithm, trained on biased data, influences whether a human being sits in a cage awaiting trial.
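
A minimal simulation can make the proxy mechanism concrete. This is not any vendor's actual model; the feature names, rates, and the use of logistic regression below are illustrative assumptions. The point it demonstrates: if two groups offend at the same underlying rate but one is policed more heavily, a model trained on arrest-derived features and arrest-derived labels assigns that group higher risk scores even though race is never an input.

```python
# Illustrative sketch only: two groups with the SAME underlying offense rate,
# but one group is policed more heavily, so both the "prior arrests" feature
# and the "rearrest" label are inflated for that group. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                       # 0 = less-policed, 1 = more-policed
detection_rate = np.where(group == 1, 0.52, 0.20)   # chance an offense leads to arrest

# Identical true behavior across groups.
true_offense = rng.random(n) < 0.30

# Observed data reflects policing intensity, not just behavior.
prior_arrests = rng.poisson(detection_rate * 3.0)                       # proxy feature
rearrest_label = (true_offense & (rng.random(n) < detection_rate)).astype(int)

# Train a "race-blind" model: group membership is NOT a feature.
X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, rearrest_label)
scores = model.predict_proba(X)[:, 1]

print("Mean risk score, less-policed group:", scores[group == 0].mean().round(3))
print("Mean risk score, more-policed group:", scores[group == 1].mean().round(3))
# Despite identical true offense rates, the more-policed group receives higher
# scores, because the prior-arrest proxy encodes policing intensity.
```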

Evidence

ProPublica's 2016 investigation of COMPAS found Black defendants were 77% more likely to be flagged as higher risk of violent recidivism than white defendants (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). Bureau of Justice Statistics data shows Black Americans arrested at 2.6x the rate of whites (https://bjs.ojp.gov/). The Partnership on AI published a 2019 report on risk assessment tool limitations (https://partnershiponai.org/paper/report-on-algorithmic-risk-assessment-tools-in-the-u-s-criminal-justice-system/).
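
The metric behind ProPublica's finding is a comparison of false positive rates by race: the share of defendants who were flagged high risk but did not recidivate. A sketch of that calculation is below; the DataFrame columns and values are hypothetical placeholders, not ProPublica's actual schema, and a real audit would load the released COMPAS dataset instead of a toy table.

```python
# Sketch of a group-wise false positive rate comparison (hypothetical columns).
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group_value: str) -> float:
    """FPR = share of non-recidivists in the group who were flagged high risk."""
    non_recidivists = df[(df["race"] == group_value) & (df["recidivated"] == 0)]
    return (non_recidivists["high_risk_flag"] == 1).mean()

# Toy data purely to show the computation.
toy = pd.DataFrame({
    "race":           ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk_flag": [1, 1, 0, 0, 1, 0],
    "recidivated":    [0, 0, 0, 0, 0, 0],
})
for grp in ("Black", "White"):
    print(grp, "FPR:", round(false_positive_rate(toy, grp), 2))
```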
