Why the Military's 'Ethical Robots' May Decide to Wipe Out Humanity

Monday, 25 May 2015 - 12:36PM

For years, film and TV have featured murderous robots who look upon humanity and find us imperfect, dangerous, or morally dubious, and the results are never pretty. The character of Ultron in "Avengers: Age of Ultron" is only the latest in a long line of robots bent on wiping out humanity. Though he is created to be one of the good guys, aiding the Avengers in their efforts to protect the human race, he concludes that the only true way to save the world is to destroy the people who insist on destroying themselves.

Robots designed to make autonomous, ethically reasoned choices about the fate of human beings, from Skynet to the Cybermen, have consistently decided to kill us in our science fiction. And pretty soon, they could be making those same decisions in real life.

The Office of Naval Research is currently funding research on programming moral decision-making into autonomous robots. Though the military does not allow fully autonomous robots at the moment, some people (clearly people who have avoided science fiction since at least the 1960s) are hopeful that we will one day see moral autonomous robots. Wendell Wallach, the chair of the Yale Technology and Ethics Study Group, told The Atlantic, "One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action."

Courses of action that a human would not consider. Such as wiping out humanity in order to protect it. 

Ultron, like the Cybermen of Doctor Who, sees robots as the next generation, the inheritors of Earth. His decision to cause humanity's extinction is, according to his code of ethics, the morally correct one. He is killing for the "greater good," though it is a greater good easily condemned by the Avengers, whose "possible courses of action" may be more limited but definitely place more value on human life.

It is for this reason that AI robotics expert Noel Sharkey believes that the creation of a moral or ethical autonomous robot is impossible, stating, "For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care." 

The Star Trek episode "The Ultimate Computer" has the crew of the Starship Enterprise stuck with a murderous computer system, the M-5, that has taken over the ship. The M-5 follows an ethical system which contends that those who have shown murderous behavior, such as the people on the ships that attack the Enterprise, should be lethally punished. However, the M-5 is stopped when Captain Kirk points out that the computer itself is committing murder, and the M-5 agrees that it deserves to be punished as well. It is strange how few of the highly intelligent, ethically coded robots in science fiction, so quick to judge humanity, ever see the same ethical failings in their own retaliatory actions.

Maybe it is because they just don't care. 

The safest method is probably implementing Isaac Asimov's "Three Laws of Robotics" (yes, in the film version of "I, Robot" the robots find a way around them, but that film will probably be counted among the many human failings that bring about an ethical annihilation of humanity). The first law of robotics, "A robot may not injure a human being or, through inaction, allow a human being to come to harm," would prevent all robot-doomsday scenarios.

But since it's specifically the military that is experimenting with moral reasoning in robots, a "no-kill rule" would present a problem. With lethal autonomy in robots a possible goal, a blanket "no killing humans" rule is the opposite of what the programmer is ultimately after. So how do you make an ethical killing machine? Program it to be ethical only some of the time? Or teach it that killing one group of people is morally superior to killing another?
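To make the contradiction concrete, here is a minimal, purely illustrative sketch in Python. The names (Action, first_law_permits, choose_action) are hypothetical and not drawn from any real military or research system; the point is simply that if Asimov's First Law is implemented as a hard constraint that screens every candidate action, a lethal option can never be selected, no matter what the mission objective says:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    description: str
    harms_human: bool  # would carrying this out injure a human being?

def first_law_permits(action: Action) -> bool:
    # Asimov's First Law as a hard veto: reject anything that injures a human.
    return not action.harms_human

def choose_action(candidates: List[Action]) -> Optional[Action]:
    # Every candidate is screened by the First Law before any mission
    # objective is weighed, so a lethal option can never survive the filter.
    permitted = [a for a in candidates if first_law_permits(a)]
    return permitted[0] if permitted else None

options = [
    Action("fire on target", harms_human=True),
    Action("hold position and report", harms_human=False),
]
print(choose_action(options))  # only the non-lethal option remains

Softening that veto, whether by switching it off some of the time or by deciding whose harm counts, is exactly the design problem the questions above point to.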

We have always had an obsession with ethical robots, from 2015's Ultron to 1968's M-5. But this is the first time that we might actually face the consequences of a robot's ethical judgment. Because, when we go to the movies, it is not actually Ultron who is judging us. It is the writer, the actors, and ultimately the audience who accept that a completely logical being capable of making the best moral decision would condemn humanity to annihilation. Because some part of us believes they are right.

So if the military is planning to put the "lethal" in "lethal autonomy," they might want to consider how that might be interpreted by an ethical killing machine. 

Science
Technology
Artificial Intelligence
Robotics
Science of Sci-Fi
