I've recently been interested in exploring biased algorithms and their effects. It's not a problem with mathematics or computer science per se; humans with implicit bias come to false conclusions all the time, and we're the source of these problematic algorithms after all. The problem is that these bad assumptions can be deployed at massive scale and go unquestioned because we think of the math as infallible.

A recent episode of 99% Invisible, The Age of the Algorithm, discusses this topic and gives some examples of where algorithmic bias is having real, negative effects today.

Most recidivism algorithms look at a few types of data, including a person's record of arrests and convictions and their responses to a questionnaire, then generate a score. But the questions, about things like whether one grew up in a high-crime neighborhood or has a family member in prison, are in many cases "basically proxies for race and class," explains O'Neil.
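To make the mechanism concrete, here's a minimal sketch of how a score like that might be computed. The feature names, weights, and numbers are entirely made up for illustration; real tools are proprietary and far more complex, but the structural problem is the same:

```python
# Hypothetical risk score. The features and weights below are invented
# for illustration, not taken from any real recidivism tool.

def risk_score(prior_arrests: int,
               high_crime_neighborhood: bool,
               family_member_in_prison: bool) -> float:
    """Return a score from 0-10; higher means 'riskier'."""
    score = min(prior_arrests, 5) * 1.0            # actual record
    score += 2.5 if high_crime_neighborhood else 0  # proxy for race/class
    score += 2.5 if family_member_in_prison else 0  # proxy for race/class
    return score

# Two people with identical criminal records get very different scores
# purely because of where they grew up and who they're related to.
print(risk_score(2, high_crime_neighborhood=True,  family_member_in_prison=True))   # 7.0
print(risk_score(2, high_crime_neighborhood=False, family_member_in_prison=False))  # 2.0
```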

Essentially, any time you use historical data that was affected by bias to influence future decisions, you risk perpetuating that bias.
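A toy simulation, with made-up numbers, shows the feedback loop. Suppose two groups offend at exactly the same rate, but one has historically been policed more heavily; a model that naively learns from arrest records will score that group as riskier, and if its scores direct future policing, the gap compounds in the next generation of data:

```python
import random

random.seed(0)

# Hypothetical: both groups offend at the same underlying rate, but
# group B's offenses are twice as likely to result in an arrest record.
OFFENSE_RATE = 0.10
ARREST_PROB = {"A": 0.3, "B": 0.6}  # biased historical enforcement

def historical_arrest_rate(group: str, n: int = 10_000) -> float:
    """Fraction of a group's members with an arrest on record."""
    arrests = sum(
        1 for _ in range(n)
        if random.random() < OFFENSE_RATE
        and random.random() < ARREST_PROB[group]
    )
    return arrests / n

# A "predictive" model that learns from these records will rate group B
# roughly twice as risky, despite identical true offense rates.
for group in ("A", "B"):
    print(group, round(historical_arrest_rate(group), 3))
```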

If you’re interested, Cathy O’Neil also wrote a book called Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.