Scientists Teach Self-Driving Cars to Make Ethical Decisions

Tuesday, 11 July 2017 - 2:12PM
Technology
Artificial Intelligence
Image credit: Grendelkahn

In discussions surrounding artificial intelligence (AI), many people harbor a lingering worry that AI will one day become just a little too human. That worry may grow with a new study exploring the possibility of creating algorithms that allow self-driving cars to make human-like moral and ethical decisions.

The scientists found that the moral and ethical decisions we make can be boiled down to what they call a "value-of-life-based model." In essence, they created an algorithm that, when faced with obstacles in the road (inanimate objects, animals, and humans), would use human-like judgment to determine the value of life, so to speak. Previously, it was thought that moral decisions could not be modeled because they were too variable and dependent on circumstance, but that has (apparently) turned out not to be the case.
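To make the idea concrete, a value-of-life-based model can be sketched as a lookup of learned values per obstacle category, with the vehicle steering toward the lowest-valued obstacle when a collision is unavoidable. The categories, numeric values, and function names below are hypothetical placeholders for illustration, not the values fitted in the study:

```python
# Minimal sketch of a "value-of-life-based model": each obstacle
# category carries a single learned value, and when a collision is
# unavoidable the controller picks the lane whose obstacle has the
# lowest value. All values here are illustrative assumptions.

OBSTACLE_VALUES = {
    "traffic_cone": 0.05,   # inanimate object
    "dog": 0.40,            # animal
    "pedestrian": 1.00,     # human
}

def choose_lane(lanes):
    """Given a mapping of lane -> obstacle category, return the lane
    whose obstacle has the lowest value of life."""
    return min(lanes, key=lambda lane: OBSTACLE_VALUES[lanes[lane]])

# Example: a cone in the left lane, a pedestrian straight ahead.
lanes = {"left": "traffic_cone", "center": "pedestrian"}
print(choose_lane(lanes))  # -> "left"
```

The point of the model is that a single scalar per category, fitted to human choices in simulated dilemmas, reproduces most of those choices; the hard part is agreeing on the values, not computing the minimum.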


Image Credit: Wikimedia Commons


So, this type of algorithm could clearly change a lot of existing and developing technology in a pretty big way. Everything from AI drones to medical devices could, one day, contain algorithms that allow them to make ethical decisions. Machines that once only completed tasks directly programmed for them would not only be learning how to make their own decisions but would be factoring in a human-like morality. Despite the creep factor, however, there are some positives: it is estimated that self-driving cars could one day save 300,000 lives per decade. With this development, that number could perhaps be even higher.


But how far do we go? Professor Peter König, a senior author of the paper, says, "We have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans." Do we continue modifying machinery until it is indistinguishable from our own minds, or better than them? It's difficult to say. There are enough problems in the world, and we certainly do not need a humanoid robot uprising to add to the list. But because the technology has the capacity to save lives, at least when it comes to self-driving cars, it is difficult to make a judgment call. It might be easy for these machines to make the "correct" moral decision, but for humans, things are still a bit more complicated.
