How Far Should Driverless Cars Go To Save Your Life?

Monday, 27 June 2016 - 6:24AM
Artificial Intelligence
As driverless cars come closer and closer to taking over our roads, some of the issues at the heart of A.I. development still need to be answered before a truly driverless road system becomes a reality. Should manufacturers create vehicles with varying degrees of morality programmed into them, depending on what the consumer wants? Should the government mandate that all self-driving cars share the same value of protecting the greatest good, even if that's not so good for the car's passengers? And how would that affect people's willingness to make the switch to driverless cars?
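
To make the idea of "degrees of morality" a little more concrete, here is a minimal, purely illustrative sketch (in Python) of how a consumer-selectable ethics setting might feed into a vehicle's crash-avoidance logic. Every name and number in it (EthicsMode, Outcome, choose_action, the harm estimates) is hypothetical and not drawn from any real autonomous-driving system.

# Purely illustrative sketch: how a consumer-selectable "ethics setting"
# might weigh passengers against pedestrians in an unavoidable-crash scenario.
# All names and numbers here are hypothetical, not taken from any real system.

from dataclasses import dataclass
from enum import Enum


class EthicsMode(Enum):
    PROTECT_PASSENGERS = "protect_passengers"  # always favor the occupants
    UTILITARIAN = "utilitarian"                # minimize total expected harm


@dataclass
class Outcome:
    description: str
    expected_passenger_harm: float   # 0.0 (none) to 1.0 (fatal)
    expected_pedestrian_harm: float


def choose_action(options, mode):
    """Pick the least-bad outcome according to the configured ethics mode."""
    if mode is EthicsMode.PROTECT_PASSENGERS:
        # Rank by passenger harm first; pedestrian harm only breaks ties.
        key = lambda o: (o.expected_passenger_harm, o.expected_pedestrian_harm)
    else:
        # Utilitarian: minimize total harm, regardless of who bears it.
        key = lambda o: o.expected_passenger_harm + o.expected_pedestrian_harm
    return min(options, key=key)


if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier", 0.9, 0.0),
        Outcome("stay in lane", 0.1, 0.8),
    ]
    for mode in EthicsMode:
        print(mode.value, "->", choose_action(options, mode).description)

The two modes pick opposite actions in this toy scenario, which is exactly the tension the surveys below try to measure.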

"Autonomous cars can revolutionize transportation," says cognitive scientist Iyad Rahwan. "But they pose a social and moral dilemma that may delay adoption of this technology."
 
Rahwan is co-author of a new research study in Science magazine titled "The Social Dilemma of Autonomous Vehicles." The study explores the implications of six online surveys of U.S. residents, conducted last year, that asked how people believe autonomous vehicles should behave. In an abstract sense, most people approve of the notion that driverless cars should swerve into walls or otherwise sacrifice their passengers to save a greater number of pedestrians. But here's the problem: those same people also want to ride in cars that protect their passengers at all costs.

Through a series of quizzes that presented grim choices in which participants had to decide between saving or sacrificing themselves - as well as fellow passengers and family members - to spare others, the researchers unsurprisingly found that people would rather stay alive. Each survey presented different situations, such as varying the number of pedestrian lives that could be saved or adding a family member to the scenario. In one survey, they found that participants were generally hesitant to accept blanket government regulation of A.I. algorithms in driverless cars.

Expert opinion on these results varies, suggesting that we may still be some way off from an acceptable solution to this potentially huge problem.

"Before we can put our values into machines, we have to figure out how to make our values clear and consistent," writes Harvard University philosopher and cognitive scientist Joshua Greene in the same issue of Science.

"If you assume that the purpose of A.I. is to replace people, then you will need to teach the car ethics," said Amitai Etzioni, a sociologist at George Washington University. "It should rather be a partnership between the human and the tool, and the person should be the one who provides the ethical guidance."


But psychologist Kurt Gray thinks a workable compromise can be reached. Even if all driverless cars are programmed to protect passengers in emergencies, automobile accidents will still decline. Despite posing a danger to pedestrians on rare occasions, these vehicles still "won't speed, won't drunk drive, and won't text while driving, which would be a win for society," Gray says.

For more on this issue, we recommend checking out Patrick Lin's excellent TED Talk below.



Top Image Credit: Google




