Risk assessment algorithms flag Black defendants as high-risk at 2x the rate
Pretrial risk assessment tools like COMPAS, marketed as objective alternatives to judicial discretion, systematically produce racially disparate outcomes. ProPublica's analysis found that Black defendants were 77% more likely to be flagged as higher risk of committing a future violent crime, and 45% more likely to be predicted to commit a future crime of any kind, than white defendants. Among those who did not go on to reoffend, 23% of Black defendants were nonetheless classified as high-risk, compared to just 10% of white defendants. In other words, Black defendants were nearly twice as likely to be 'false positives': people labeled dangerous who in fact committed no future crime. The first sketch below makes this comparison concrete.

The tools generate these disparities because they rely on inputs such as prior arrests, prior convictions, and residential stability, all of which are shaped by racialized policing and systemic inequality. A person from an over-policed neighborhood has more prior police contacts not because they are more criminal, but because more police are deployed to their block. The algorithm encodes this structural racism as 'risk'; the second sketch below simulates the mechanism.

Some jurisdictions have adopted these tools as replacements for cash bail, trading one racially biased system for another that carries the appearance of scientific neutrality. The bias persists because the companies selling these tools (like Equivant, which makes COMPAS) profit from their adoption and resist independent auditing, and because the word 'algorithm' lends an aura of objectivity that makes bias harder to challenge politically.
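To see what the false-positive disparity means in practice, here is a minimal sketch using invented records, not ProPublica's actual Broward County data: among the people who did not reoffend, what share of each group was flagged high-risk?

```python
# A minimal sketch with invented records (NOT ProPublica's dataset) showing
# how a group-wise false positive rate is computed. A "false positive" here
# is someone flagged high-risk who did not go on to reoffend.

# Each record: (group, flagged_high_risk, reoffended)
records = [
    ("black", True,  False), ("black", False, False), ("black", True, True),
    ("black", True,  False), ("black", False, False),
    ("white", False, False), ("white", False, False), ("white", True, True),
    ("white", True,  False), ("white", False, False),
]

def false_positive_rate(group):
    # Among people in `group` who did NOT reoffend, what share were flagged?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
# With these toy records: black 50%, white 25% -- the same roughly 2x
# pattern ProPublica reported (23% vs. 10%) on real Broward County cases.
```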
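And a toy simulation of how an arrest-history input can encode patrol intensity rather than behavior. All parameters here (offense rate, patrol levels, detection probability, and the risk-score formula) are assumptions invented for illustration: two neighborhoods offend at identical rates, but the over-policed one accumulates more prior arrests and therefore higher 'risk' scores.

```python
# A toy simulation (assumed parameters throughout): two neighborhoods with
# IDENTICAL offending behavior, but neighborhood A is patrolled 3x as heavily,
# so offenses there are more likely to become recorded arrests.
import random

random.seed(0)
OFFENSE_RATE = 0.10            # same true behavior in both neighborhoods
PATROL = {"A": 3.0, "B": 1.0}  # A is over-policed
DETECTION_PER_PATROL = 0.15    # chance an offense leads to arrest, per unit patrol

def prior_arrests(neighborhood, years=10):
    # Count arrests accumulated over `years`: an arrest requires both an
    # offense AND detection, and detection scales with patrol intensity.
    arrests = 0
    for _ in range(years):
        offended = random.random() < OFFENSE_RATE
        detected = random.random() < DETECTION_PER_PATROL * PATROL[neighborhood]
        arrests += offended and detected
    return arrests

def risk_score(arrests):
    # Hypothetical stand-in for a tool that treats arrest history as "risk"
    return min(10, 1 + 2 * arrests)

for hood in "AB":
    scores = [risk_score(prior_arrests(hood)) for _ in range(10_000)]
    print(f"neighborhood {hood}: mean risk score = {sum(scores)/len(scores):.2f}")
# Identical OFFENSE_RATE, yet neighborhood A scores higher: the model has
# learned patrol intensity, not behavior.
```

Note that no variable named 'race' or 'neighborhood' has to appear in the scoring formula for the disparity to emerge; the arrest count alone carries the deployment pattern into the score.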
Evidence
ProPublica's 2016 investigation 'Machine Bias', analyzing the COMPAS algorithm across 7,000+ cases in Broward County, FL.
ProPublica's methodology paper showing Black defendants nearly twice as likely to be false positives (23% vs. 10%).
Brookings Institution analysis of risk assessment instruments in criminal justice.
Marshall Project investigation 'Can Racist Algorithms Be Fixed?'
Columbia Law Review and Harvard analyses of how facially neutral inputs encode racial bias.
Innovating Justice report 'Beyond the Algorithm' on pretrial reform and racial fairness.