Past research has found that people generally prefer to minimize casualties in a hypothetical autonomous car crash, but what happens when people are presented with more complex scenarios? And what happens when autonomous vehicles must choose between two outcomes in which at least one person could die? Who should those vehicles save, and on what basis should they make those ethical judgments?
It may sound like a nightmarish spin on “would you rather,” but researchers say such thought experiments are necessary for programming autonomous vehicles and for the policies that regulate them. What’s more, responses to these difficult dilemmas may vary across cultures, revealing that there is no universal agreement on which option is morally superior.
In one of the largest studies of its kind, researchers with MIT’s Media Lab and other institutions presented variations of this ethical conundrum to millions of people in ten languages across 233 countries and territories in an experiment called the Moral Machine, the findings of which were published in the journal Nature this week.
…continue reading ‘Self-Driving Cars Can’t Choose Who to Kill Yet, But People Already Have Lots of Opinions’
Image: By Mcha6677 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=70786982
Gizmodo