Channel: Technology and society

Nechama Brodie
Author, fact-checker and academic
Saturday, 29 April 2017

Crime Prediction Software Is Racist. Why Are We Still Trusting Computers To Deliver Justice?

Humans have been trying to predict the future for aeons: our ancestors read stars or tea leaves looking for signs. These days we prefer to talk about 'scientific' concepts like P values and Bayesian inference, but too often their use has more in common with discredited practices like phrenology — the Victorian pseudoscience that held it was possible to read a person's character from the shape of his or her skull. Skull reading soon became a popular method of divining a person's criminal potential, which, it turns out, may have been nearly as accurate as modern crime prediction algorithms.

It's been a year since investigative journalism group ProPublica published their seminal report on the racism of crime prediction algorithms used by the American justice system. The impact of these findings is still, slowly, trickling into public discussion around machine bias.

The crime prediction algorithms, developed and owned by for-profit companies that refuse to reveal their proprietary code, form the backbone of software that performs risk assessments on criminal offenders, using answers to lengthy questionnaires to calculate the odds that a person will re-offend. In several US states this information is given to judges before sentencing.
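
Because the vendors will not disclose how answers are turned into a score, the sketch below is purely hypothetical: a minimal Python illustration of the general shape of such a tool, in which questionnaire answers are weighted, summed and mapped onto a risk band. The questions, weights and cut-offs are invented and have no connection to any real product.

```python
# Purely hypothetical sketch: the real risk-assessment products are
# closed-source, so these questions, weights and cut-offs are invented.

answers = {
    "prior_arrests": 3,         # count
    "age_at_first_arrest": 19,  # years
    "currently_employed": 0,    # 1 = yes, 0 = no
}

weights = {
    "prior_arrests": 1.5,
    "age_at_first_arrest": -0.1,
    "currently_employed": -2.0,
}

def risk_score(answers, weights):
    """Weighted sum of questionnaire answers -> raw risk score."""
    return sum(weights[q] * answers[q] for q in weights)

def risk_band(score, low=0.0, high=3.0):
    """Map a raw score onto the low/medium/high bands a judge might see."""
    if score < low:
        return "low"
    return "medium" if score < high else "high"

score = risk_score(answers, weights)
print(round(score, 2), risk_band(score))  # 2.6 medium
```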

ProPublica studied risk scores assigned to 7,000 people and found the ratings were remarkably unreliable — only 'somewhat more accurate than a coin flip'. They also uncovered significant and unaccounted-for racial disparities. 

While 'crime prediction' software was intended as a mitigation tool to reduce or even remove jail time where assessed risk was low, the algorithms consistently ranked black convicts as higher risk than white ones even when the ranking was not justified by the original crime. ProPublica's research also exposed how the software disproportionately missed recidivism among white convicts, labelling people who went on to re-offend as low risk.
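
At its core, ProPublica's analysis compared error rates across racial groups: how often each group was wrongly flagged as high risk, and how often genuine re-offenders were wrongly cleared. Here is a minimal Python sketch of that kind of check, with a handful of invented records and field names standing in for the roughly 7,000 real cases the reporters examined.

```python
# Sketch of an error-rate comparison in the spirit of ProPublica's analysis.
# The records and field names below are invented for illustration.

records = [
    {"race": "black", "predicted_high_risk": True,  "reoffended": False},
    {"race": "black", "predicted_high_risk": True,  "reoffended": True},
    {"race": "black", "predicted_high_risk": False, "reoffended": False},
    {"race": "white", "predicted_high_risk": False, "reoffended": True},
    {"race": "white", "predicted_high_risk": False, "reoffended": False},
    {"race": "white", "predicted_high_risk": True,  "reoffended": True},
]

def error_rates(rows):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(r["predicted_high_risk"] and not r["reoffended"] for r in rows)
    fn = sum(not r["predicted_high_risk"] and r["reoffended"] for r in rows)
    non_reoffenders = sum(not r["reoffended"] for r in rows)
    reoffenders = sum(r["reoffended"] for r in rows)
    return (fp / non_reoffenders if non_reoffenders else 0.0,
            fn / reoffenders if reoffenders else 0.0)

for group in ("black", "white"):
    fpr, fnr = error_rates([r for r in records if r["race"] == group])
    print(f"{group}: wrongly flagged {fpr:.0%}, wrongly cleared {fnr:.0%}")
```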

ProPublica's work is important because it shows empirically how inadvertent human bias directly influences human-made code.

Comments (3)
  1. Chris Baker · about 2 years ago

    Wow! Speaking as an "expert" in the assessment and calculation of risk, it's clear to me that the author knows little or nothing about the subject. I am sure it's true that there is some software around that may claim to assist in assessing the risk of reoffending among a criminal population, but I seriously doubt that Bayesian models were used, and if they were, the model would only take into account those factors that the modeller felt were relevant or influential. This is like buying a spreadsheet, filling it full of numbers and assuming the answers are correct! The spreadsheet may have added up the columns correctly, but garbage in, garbage out still applies. "I bought a spreadsheet and it says I'll be a millionaire by tomorrow lunchtime!" doesn't mean a thing.

    1. Nechama Brodie · about 2 years ago

      I think the point is that we don't *know* how the software calculates risk, because the software is IP protected and its developers won't discuss or share that information. All the reporters can do is comment on the effects of the software as it is applied – which is what the report shows.

    2. Chris Baker · about 2 years ago

      @Nechama Brodie Like any computer system, an inference engine like Bayes depends entirely on what inputs are provided and the weights applied. My guess is that the model is poor and the numbers inaccurate. If they don't let anyone know what the calculations are even based on, they shouldn't be using it to make life-changing decisions! Don't blame the computers, it's the idiots running them that are at fault. They might as well be casting runes as using a bad model.
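
To make the garbage-in-garbage-out point from the comments concrete, here is a toy Python example (all numbers invented): the same answers pushed through the same kind of weighted model, under two different sets of modeller-chosen weights, produce opposite verdicts.

```python
# Toy illustration of the commenter's point: identical inputs, different
# modeller-chosen weights, opposite conclusions. All numbers are invented.

answers = {"prior_arrests": 2, "currently_employed": 1}

weights_a = {"prior_arrests": 0.5, "currently_employed": -2.0}  # employment weighted heavily
weights_b = {"prior_arrests": 2.0, "currently_employed": -0.5}  # arrests weighted heavily

def verdict(answers, weights, threshold=1.0):
    score = sum(weights[k] * answers[k] for k in weights)
    return "high risk" if score >= threshold else "low risk"

print(verdict(answers, weights_a))  # low risk  (0.5*2 - 2.0*1 = -1.0)
print(verdict(answers, weights_b))  # high risk (2.0*2 - 0.5*1 =  3.5)
```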