Two children are drowning: your son and a stranger. Who would you save first? Your son, right? What if one of the children were a thinking, feeling robot?
In a fascinating op-ed for Aeon, philosopher Eric Schwitzgebel of the University of California, Riverside, argues that our hypothetical creations would be more than strangers to us. “Moral relation to robots will more closely resemble the relation that parents have to their children,” he writes, “… than the relationship between human strangers.”
Humanity’s fraught relationship with artificial intelligence has been a staple of science fiction since before modern computer science took shape in the mid-20th century. As Schwitzgebel puts it:
The moral status of robots is a frequent theme in science fiction, back at least to Isaac Asimov’s robot stories, and the consensus is clear: if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings. Philosophers and researchers on artificial intelligence who have written about this issue generally agree.
image: Amal FM via Flickr – CC BY 2.0
Source: MIT Technology Review