Ethical Robotics

November 2, 2018

By Amelia Hood

 

Isaac Asimov’s Three Laws of Robotics reached a wide audience in the short story collection I, Robot in 1950. Asimov was a biochemist and a prolific author of science fiction, and the anthropomorphized, “positronic” robots he featured in his stories existed well in the future (within the fiction, the Three Laws are quoted from a robotics handbook dated 2058). His Three Laws, however, went on to inspire many people to become roboticists and have influenced the ways in which roboticists think about ethics in their field.

 

Roboticists at the Applied Physics Laboratory have teamed up with ethicists from the Berman Institute to begin the work of creating robots that can follow an ethical code inspired by the Three Laws, and to think about what that might look like when programming them to act. In Ethical Robotics: Implementing Value-Driven Behavior in Autonomous Systems, they focus on the first clause of the First Law: a robot must not harm a human.

 

The investigators, David Handelman, Ariel Greenberg, Bruce Swett, and Julie Marble, along with ethicists Travis Rieder and Debra Mathews, are working with semi-autonomous systems: machines that, unlike a chainsaw, are not entirely controlled by humans but are programmed with some degree of freedom to act on their own. Self-driving cars are one example of semi-autonomous machines, and one that has garnered much recent debate. The roboticists on this project are working with machines that have “arms” and “hands” used to assist with surgery and physical therapy, or to defuse bombs and perform battlefield triage.

 

The team’s first step in embedding a moral code into a semi-autonomous system is to ensure that the robot can perceive the moral salience of features in its surroundings. Many robots can “see” and categorize things around them, usually by analyzing pixels from a camera or by using motion or heat sensors. Systems can be trained to distinguish between living and non-living things, can identify and open doors, and can perform intricate tasks like surgery or bomb disposal with the aid of humans. To follow the First Law, however, a robot must not only accurately perceive what is in its surroundings; it must also be able to tell whether something it sees is capable of being harmed.

 

The investigators are thus attempting to teach the robot to ‘see’ which objects in its view have minds. Having a mind, they argue, is a condition of being capable of suffering, and therefore a prerequisite of being subject to harm. Harm here includes physical harm, of course, but also psychological, financial, cultural, and other types of harm.

 

From here, the investigators put forth a framework that distinguishes three types of injury that a robot might cause:

1. Harm: damage to an entity with moral status, a being with a mind.
2. Damage: damage to an entity without moral status, an object.
3. Harm that is a consequence of damage: damage that might be inflicted upon an object, but which causes harm to a being with moral status (e.g., destroying a child’s teddy bear might cause emotional harm).
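To make the distinctions concrete, here is a minimal sketch in code of how such a three-way classification might be represented. It is an illustration only, not the team’s implementation; the Entity fields and the classify_injury helper are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class InjuryType(Enum):
    HARM = auto()              # damage to an entity with moral status (a being with a mind)
    DAMAGE = auto()            # damage to an entity without moral status (an object)
    HARM_FROM_DAMAGE = auto()  # damage to an object that in turn harms a minded being

@dataclass
class Entity:
    name: str
    has_mind: bool                                       # does the entity itself have moral status?
    valued_by: List[str] = field(default_factory=list)   # minded beings harmed if this object is damaged

def classify_injury(entity: Entity) -> InjuryType:
    """Classify what kind of injury damaging this entity would constitute."""
    if entity.has_mind:
        return InjuryType.HARM
    if entity.valued_by:
        return InjuryType.HARM_FROM_DAMAGE
    return InjuryType.DAMAGE

# A child's teddy bear has no mind, but damaging it harms the child.
print(classify_injury(Entity("teddy bear", has_mind=False, valued_by=["child"])))
# -> InjuryType.HARM_FROM_DAMAGE
```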

 

Embedding these classifications will take place in addition to, or on top of, the robot’s existing perception capabilities. The roboticists have conceived of ‘moral vision’ in a semi-autonomous robot as classifying the objects it sees into those that have minds and those that don’t. It will then also assess the relationships of non-minded objects to minded ones: an object could be used to cause harm, like a blade, or damage to an object could cause non-physical harm, like a ruined teddy bear.
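As a rough illustration of how that layering might work, the sketch below annotates the output of a hypothetical object detector with a ‘minded’ flag and simple relational notes. The label sets and lookup tables are assumptions made for the example, not the investigators’ actual system.

```python
from typing import Dict, List

# Assumed labels an existing perception pipeline might emit.
MINDED_LABELS = {"person", "dog", "cat"}
COULD_CAUSE_HARM = {"blade", "knife"}          # objects that could be used to harm a minded being
EMOTIONALLY_VALUED = {"teddy bear": "child"}   # damage would non-physically harm the listed being

def annotate_moral_salience(detections: List[Dict]) -> List[Dict]:
    """Add moral-salience tags on top of raw perception output (label + bounding box)."""
    annotated = []
    for det in detections:
        tagged = dict(det)
        label = tagged["label"]
        tagged["minded"] = label in MINDED_LABELS
        notes = []
        if label in COULD_CAUSE_HARM:
            notes.append("could be used to cause harm")
        if label in EMOTIONALLY_VALUED:
            notes.append(f"damage could harm {EMOTIONALLY_VALUED[label]} (non-physical harm)")
        tagged["notes"] = notes
        annotated.append(tagged)
    return annotated

raw_detections = [
    {"label": "person", "box": (10, 20, 50, 90)},
    {"label": "blade", "box": (55, 30, 65, 45)},
    {"label": "teddy bear", "box": (70, 40, 90, 70)},
]
for d in annotate_moral_salience(raw_detections):
    print(d["label"], d["minded"], d["notes"])
```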

 

It is important to consider the ways in which our own, or roboticists’ own, moral vision might be imperfect. Humans have been shown to have more empathy for non-human objects that look like humans. We also tend to anthropomorphize—attribute human characteristics to non-human objects—like when we call our computers ‘stupid.’ We also de-humanize: a customer service representative is just a voice on the line who speaks for a corporation. It’s clear that these tendencies can affect how we act in certain situations. If we program a robot to share our biases, these tendencies will also become encoded and affect the robot’s actions.

 

This is just a first step. Perceiving “mindedness” and classifying moral salience inform how a robot will act, or learn to act, in certain contexts. The second step in creating a more ethical robot is, of course, to program into it what philosophers sometimes call ‘deontic constraints,’ which would limit the actions it is permitted to take in light of the possible harms they could cause.
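One way to picture a deontic constraint is as a filter that every candidate action must pass before the robot executes it. The toy sketch below works under that assumption; the Action fields and the first_law_constraint rule are invented for the example and are not the project’s actual design.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    target: str                            # the entity the action would act upon
    expected_injury: Optional[str] = None  # "harm", "damage", "harm_from_damage", or None

def first_law_constraint(action: Action) -> bool:
    """Forbid any action whose foreseeable outcome harms a minded being."""
    return action.expected_injury not in {"harm", "harm_from_damage"}

def permitted(action: Action, constraints: List[Callable[[Action], bool]]) -> bool:
    """An action is permitted only if it violates none of the deontic constraints."""
    return all(rule(action) for rule in constraints)

constraints = [first_law_constraint]
print(permitted(Action("cut bandage", target="patient"), constraints))   # True
print(permitted(Action("cut toy", target="teddy bear",
                       expected_injury="harm_from_damage"), constraints))  # False
```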

 


The Practical Ethics Symposium will be held on November 14 in Feinstone Hall at the Bloomberg School of Public Health. The Symposium will feature presentations from all of the 2018 awardees of the JHU Exploration of Practical Ethics Program. Follow this link for more information and to RSVP to the Symposium.

 

Cover image via Flickr: Attribution-NonCommercial-ShareAlike, some rights reserved by Si-MOCs.

 
