Nechama Brodie is a South African journalist and researcher. She is the author of six books, including two critically acclaimed urban histories of Johannesburg and Cape Town. She works as the head of training and research at TRI Facts, part of independent fact-checking organisation Africa Check, and is completing a PhD in data methodology and media studies at the University of the Witwatersrand.
Humans have been trying to predict the future for aeons: our ancestors read stars or tea leaves looking for signs. These days we like to talk about 'scientific' concepts like P values and Bayesian inference, but, too often, the way these terms are used has more in common with discredited practices like phrenology, a Victorian pseudoscience that hypothesised it was possible to read a person's character from the shape of his or her skull. Skull reading soon became a popular method of divining a person's criminal potential, which, it turns out, may have been nearly as accurate as modern crime prediction algorithms.
It's been a year since investigative journalism group ProPublica published their seminal report on the racism of crime prediction algorithms used by the American justice system. The impact of these findings is still, slowly, trickling into public discussion around machine bias.
The crime prediction algorithms, developed and owned by for-profit companies that refuse to reveal their proprietary code, form the backbone of software that performs risk assessments on criminal offenders, using answers to complex questionnaires to calculate the odds that a person will re-offend. In several US states this information is given to judges before sentencing.
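To make the mechanics a little more concrete, here is a purely hypothetical sketch, in Python, of how questionnaire answers could be turned into an odds-like risk score. The feature names, weights and threshold are invented for illustration; the real products are proprietary and their internals have not been disclosed, so this should not be read as how any particular vendor's tool actually works.

```python
# Purely illustrative sketch: turning questionnaire answers into a recidivism
# "risk score". All feature names, weights and the 0.5 threshold are invented;
# the commercial tools discussed in the article are proprietary black boxes.

import math

def risk_score(answers, weights, bias=0.0):
    """Logistic-style score: map weighted questionnaire answers to a value between 0 and 1."""
    z = bias + sum(weights[q] * answers[q] for q in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical questionnaire answers (numeric encodings are made up)
answers = {"prior_arrests": 2, "age_under_25": 1, "employed": 0}
weights = {"prior_arrests": 0.6, "age_under_25": 0.8, "employed": -0.5}

score = risk_score(answers, weights, bias=-1.0)
print(f"risk score: {score:.2f}", "(flagged high risk)" if score > 0.5 else "(low risk)")
```

Whatever the specific model, the output is a single number of this kind, which is then handed to a judge as an assessment of how likely the person is to re-offend.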
ProPublica studied risk scores assigned to 7,000 people and found the ratings were remarkably unreliable — only 'somewhat more accurate than a coin flip'. They also uncovered significant and unaccounted-for racial disparities.
While 'crime prediction' software was intended as a mitigation tool, reducing or even removing jail time where the assessed risk was low, the algorithms consistently ranked black convicts as higher risk than whites even when the ranking was not justified by the original crime. ProPublica's research also exposed how the software disproportionately missed recidivism in white convicts, wrongly rating them as low risk.
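A rough illustration of what those disparities look like in practice: below is a minimal Python sketch, with made-up data, of the kind of comparison ProPublica reported, namely false positive and false negative rates for a 'high risk' label, broken out by group. It is not ProPublica's code or the vendor's; the records are invented purely to show the calculation.

```python
# Minimal sketch (not ProPublica's actual analysis) of comparing error rates
# across groups for a binary "high risk" prediction. The data below is made up.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False),   # flagged high risk, did not re-offend -> false positive
    ("black", True,  True),
    ("black", False, False),
    ("white", False, True),    # rated low risk, did re-offend -> false negative
    ("white", True,  True),
    ("white", False, False),
]

stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})

for group, predicted, reoffended in records:
    s = stats[group]
    if reoffended:
        s["pos"] += 1
        if not predicted:
            s["fn"] += 1          # missed recidivism
    else:
        s["neg"] += 1
        if predicted:
            s["fp"] += 1          # wrongly flagged as high risk

for group, s in stats.items():
    fpr = s["fp"] / s["neg"] if s["neg"] else float("nan")
    fnr = s["fn"] / s["pos"] if s["pos"] else float("nan")
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

On this toy data, one group carries all the false positives (wrongly flagged as high risk) while the other carries all the false negatives (missed recidivism), which is the pattern ProPublica described in the real risk scores.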
ProPublica's work is important because it shows empirically how inadvertent human bias directly influences human-made code.
Wow! Speaking as an "expert" in the assessment and calculation of risk, I can say it's clear that the author knows little or nothing about the subject. I am sure it's true that there is some software around that may claim to assist in assessing the risk of reoffending among a criminal population, but I seriously doubt that Bayesian models were used, and if they were, the model would only take into account those factors that the modeller felt were relevant or influential. This is like buying a spreadsheet, filling it full of numbers and assuming the answers are correct! The spreadsheet may have added up the columns correctly, but garbage in, garbage out still applies. "I bought a spreadsheet and it says I'll be a millionaire by tomorrow lunchtime!" doesn't mean a thing.