Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms.

Let’s consider one case where algorithmic accountability is sorely needed: the risk assessment scores that inform criminal-justice decisions in the U.S. legal system. These scores are calculated from a defendant’s answers to a series of questions about, for example, their age, criminal history, and other characteristics. The data are fed into an algorithm that produces a score, which can then factor into decisions about pretrial detention, probation, parole, or even sentencing. These models are often trained using proprietary machine-learning algorithms and data about previous defendants.
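
To make the mechanics concrete, here is a minimal, hypothetical sketch of such a scoring pipeline in Python. The questions, weights, and cutoff are invented for illustration only; real tools rely on proprietary models and training data that are not public, which is precisely what makes them hard to hold accountable.

```python
# Hypothetical sketch of a questionnaire-based risk score.
# Feature names, weights, and the decision cutoff are invented for
# illustration; they do not reflect any real risk assessment tool.

def risk_score(answers: dict) -> float:
    """Map questionnaire answers to a score between 0 and 10."""
    # Invented weights. In a real system these would be learned from
    # historical defendant data, which is where structural bias can creep in.
    weights = {
        "age_under_25": 1.5,
        "prior_arrests": 0.8,        # per prior arrest
        "prior_convictions": 1.2,    # per prior conviction
        "employed": -1.0,
    }
    raw = (
        weights["age_under_25"] * (1 if answers["age"] < 25 else 0)
        + weights["prior_arrests"] * answers["prior_arrests"]
        + weights["prior_convictions"] * answers["prior_convictions"]
        + weights["employed"] * (1 if answers["employed"] else 0)
    )
    return max(0.0, min(10.0, raw))  # clamp to a 0-10 scale


if __name__ == "__main__":
    defendant = {"age": 22, "prior_arrests": 2, "prior_convictions": 1, "employed": False}
    score = risk_score(defendant)
    # A downstream decision (detention, bail, parole) may hinge on a cutoff
    # that is just as opaque to the defendant as the weights themselves.
    print(f"score = {score:.1f}", "-> flagged" if score >= 5 else "-> not flagged")
```

Even in this toy version, the accountability questions are visible: who chose the weights, what data they were fitted on, and where the cutoff sits are all decisions with real consequences that the scored individual typically cannot inspect or contest.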


Source: MIT Technology Review
