Is It Ethically Wrong to Create Ethical Robots?

Monday, 19 October 2015 - 1:04PM
Between drones and driverless cars, modern robotics increasingly calls for machines capable of making ethical decisions. Many have expressed concern that driverless cars will need to make split-second ethical choices when faced with potential casualties on the road, and as drone warfare becomes more and more autonomous, machines may soon be making life-or-death decisions on a regular basis. But what does it mean to create an ethical robot, and is it ethical for us to do so? Some theorists have questioned whether creating moral robots would itself be a morally dubious act, since such robots would likely also be capable of suffering.

Some conceptions of the ethical robot do not involve sentience at all, but rather a sophisticated algorithm that gives the robot a workable facsimile of moral reasoning. In a recent study published in Nature, a caretaker robot was tasked with giving experimental subjects their medication each day, but needed a way to decide on a course of action if a subject refused. If the robot allowed the subject to skip a dose, the subject might come to harm; if the robot insisted, it would infringe on the subject's autonomy.

To teach the robot moral reasoning, the researchers gave it access to examples of how bioethicists had resolved similar conflicts between autonomy and harm. The robot then used learning algorithms to sort through the examples and derive strategies to guide it through new situations. There are risks in this approach, however: because the designers do not explicitly set the rules, they have no way of knowing exactly which ethical principles the robot will settle on.
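The account of the study does not include code, but the underlying idea of learning a trade-off between competing duties from expert-resolved cases can be illustrated with a minimal sketch. Everything below, including the feature encoding, the toy cases, and the perceptron-style update, is an illustrative assumption rather than the researchers' actual method:

```python
# Minimal sketch: learning an ethical decision rule from expert-resolved cases.
# The feature encoding, case data, and learning rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Case:
    harm_if_skipped: float   # expected harm if the dose is missed (0-1)
    autonomy_cost: float     # degree to which insisting overrides the patient (0-1)
    expert_action: str       # "insist" or "defer", as resolved by a bioethicist

# Hypothetical training cases distilled from expert judgments.
training_cases = [
    Case(0.9, 0.3, "insist"),   # missing the dose is dangerous -> insist
    Case(0.2, 0.8, "defer"),    # low risk, strong patient preference -> defer
    Case(0.7, 0.6, "insist"),
    Case(0.1, 0.4, "defer"),
]

def learn_weights(cases, lr=0.1, epochs=200):
    """Learn a simple linear trade-off between harm prevention and autonomy."""
    w_harm, w_autonomy, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for c in cases:
            score = w_harm * c.harm_if_skipped - w_autonomy * c.autonomy_cost + bias
            predicted = "insist" if score > 0 else "defer"
            target = 1.0 if c.expert_action == "insist" else -1.0
            if predicted != c.expert_action:
                # Perceptron-style update toward the expert's resolution.
                w_harm += lr * target * c.harm_if_skipped
                w_autonomy -= lr * target * c.autonomy_cost
                bias += lr * target
    return w_harm, w_autonomy, bias

def decide(case, weights):
    w_harm, w_autonomy, bias = weights
    score = w_harm * case.harm_if_skipped - w_autonomy * case.autonomy_cost + bias
    return "insist" if score > 0 else "defer"

weights = learn_weights(training_cases)
new_case = Case(0.8, 0.5, expert_action="")   # an unseen refusal scenario
print(decide(new_case, weights))              # e.g. "insist"
```

The risk mentioned above is visible even in this toy version: the learned weights, not the designers, determine exactly where the line between insisting and deferring falls.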

Furthermore, many robot ethicists have argued that human emotion is central to ethics, and that it would therefore be necessary to program certain emotions into an ethical robot. To assert that a robot can learn morality through a purely rational, logical algorithm is to fall into what philosopher Hilary Putnam called "the comfortable eighteenth century assumption that all intelligent and well-informed people who mastered the art of thinking about human actions and problems impartially would feel the appropriate 'sentiments' of approval and disapproval in the same circumstances unless there was something wrong with their personal constitution."

Guilt, for example, is considered essential to most mainstream conceptions of morality. Even when logical moral reasoning sanctions killing on the battlefield, say, because one believes the war as a whole is justified, conventional wisdom holds that a "good" person will feel some measure of guilt about killing another human being, and will therefore treat the decision with more gravity than, say, a drone would. In his influential report for the Department of Defense, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture," Ronald Arkin argued that it would be possible to program feelings such as guilt, remorse, or grief into a robot, essentially by ensuring negative feedback after the robot commits an atrocity.
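Arkin describes this mechanism at the level of architecture rather than code, but the basic feedback loop can be sketched roughly as follows. The class, thresholds, and update rule here are hypothetical illustrations, not the ethical adaptor as specified in his report:

```python
# Illustrative sketch of an affective "guilt" variable used as negative feedback.
# The values and update rules are assumptions, not Arkin's actual architecture.

class EthicalAdaptor:
    def __init__(self, max_guilt=1.0):
        self.guilt = 0.0            # accumulated guilt, in [0, max_guilt]
        self.max_guilt = max_guilt

    def after_engagement(self, estimated_civilian_harm):
        """Negative feedback: guilt rises with unintended harm caused."""
        self.guilt = min(self.max_guilt, self.guilt + estimated_civilian_harm)

    def permitted_force(self, requested_force):
        """As guilt accumulates, progressively restrict the force allowed."""
        allowed_fraction = 1.0 - self.guilt / self.max_guilt
        return requested_force * allowed_fraction

adaptor = EthicalAdaptor()
adaptor.after_engagement(estimated_civilian_harm=0.4)
print(adaptor.permitted_force(requested_force=1.0))   # 0.6 -- force is scaled back
```

The point of the scaling is simply that accumulated "guilt" progressively restricts what the system is permitted to do, which is the sense in which the feedback is negative.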

But suffering inherently accompanies feelings such as guilt and remorse, which raises a whole host of other ethical questions. Ironically, it may be unethical to create ethical robots, especially for warfare, as they would essentially be sentient beings brought into existence for the sole purpose of killing and then suffering for it. It seems deeply problematic to build robots that achieve some measure of personhood only to carry the burden of human atrocities. As New York Institute of Technology professor Kevin LaGrandeur wrote for the Institute for Ethics and Emerging Technologies:


"If a machine could truly be made to 'feel' guilt in its varying degrees, then would we have problems of machine suffering and machine 'suicide'? If we develop a truly strong Artificial Intelligence (AI) we might, and then we would face the moral problem of creating a suffering being."