Philosophers Claim that Robots Masquerading as Humans Increases the Evil in the World

Thursday, 30 October 2014 - 3:56PM

In June, the chatbot Eugene Goostman became the first program reported to have passed the Turing test, a benchmark for "true" artificial intelligence that measures whether a machine can deceive humans into believing it is human. Although Eugene was relatively primitive, and likely passed only because the judges thought he was a young non-native English speaker, many of us already interact every day with robots that we believe are people (Twitter bots are the clearest example). And even with AIs we know full well are bots, like Apple's Siri, we often interact with them as though they were human. But what effect does this relationship, particularly in cases of outright deception, have on society?

Several philosophers who specialize in the ethics of artificial intelligence recently published a paper exploring whether bots acting as humans is inherently unethical. They concluded that, while the act itself may not be morally wrong, it has an overall deleterious effect on society. "When a machine masquerades, it influences the behaviour or actions of people, not only towards the robot, but also towards other people," they wrote. "Even when the masquerade itself does not corrupt the infosphere, it changes the infosphere by making it more difficult for agents to make sound ethical decisions, increasing the chance for evil."


In other words, people need full information about a situation in order to decide which course of action is ethical. The authors used the example of Twitter bots to illustrate the point: if a person sees that a bot has followed them and is trying to decide whether to follow it back, he or she may incorrectly assume that the bot has the same moral status as a human, and therefore needlessly agonize over the decision out of consideration for non-existent feelings. This may not be the best example, as it would take a fairly convoluted argument to show that this causes the person to act unethically, or even less ethically.


So the authors offer a second example, in which firefighters are trying to save people from a burning building. If some of the "people" are actually robots, the firefighters may waste valuable time and resources saving the robots as a result of the robots' deception. According to the authors, even a case like Siri's, in which the "deception" is only a subconscious forgetfulness on the part of the user, could cause people to act unethically; they imagine a case "where a well-liked robot is saved and in the process human lives are lost."


But even if a bot's deception is unethical, does that mean the bot itself is immoral? The researchers say no: a bot can bear moral responsibility only if it has agency. For now, if a bot acts unethically, the responsibility lies with the developer, and possibly the CEO, as part of what the authors call "multi-agent systems that have at least one human agent." However, they leave room for the possibility that a robot could, in theory, have moral agency; they just don't think any AI currently in existence fits that description.

