A New Study From Google's DeepMind Shows What Happens When AI Gets Selfish

Thursday, 09 February 2017 - 3:28PM
Technology
Artificial Intelligence
As our world becomes more and more reliant on artificial intelligence, a vital moral question crops up: if two or more AI systems end up being used together, will they choose to cooperate or come into conflict with one another? In much the same way that humans ultimately have to decide whether it's better to work as a team or go it alone, the same holds true for AI. The key difference, of course, is that if an AI is left to its own devices, the choices it makes will be beyond the control of humans to alter, with no way to instill any form of empathy in its decision-making.

To understand this concept more fully, researchers at Google's AI subsidiary DeepMind published a study exploring whether they could predict how AIs would respond to various situations involving socially conscientious variables; essentially, they wanted to see whether two different AIs would choose to compete or work together. The tests were partly based on the famous game-theory scenario known as the prisoner's dilemma and took the form of various games in which cooperation and self-interest were pitted against each other.
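The prisoner's dilemma mentioned above can be stated as a simple payoff table. Here is a minimal sketch using the standard illustrative payoff values from textbook treatments, not numbers from the DeepMind study:

```python
# Classic prisoner's dilemma payoffs (standard illustrative values,
# not taken from the DeepMind study): each entry is
# (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """Return the action maximizing the row player's payoff
    against a fixed opponent action."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defecting is the best response no matter what the opponent does,
# even though mutual cooperation beats mutual defection.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

This is what makes the dilemma a dilemma: purely self-interested reasoning pushes both players toward the jointly worse outcome.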

The first game, called Gathering, has two AIs compete to retrieve apples from a central location. Each AI can choose simply to gather apples, or it can use a laser to tag the other AI, which temporarily removes it from the game and leaves the remaining AI free to gather on its own.
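The Gathering mechanic described above can be sketched roughly as follows; the timeout length and action names are assumptions for illustration, not details from the paper:

```python
# Rough sketch of the Gathering rules: an agent either gathers an
# apple or fires a tagging beam; a tagged agent sits out for a
# fixed number of steps (timeout length assumed for illustration).
TAG_TIMEOUT = 5  # assumed number of steps a tagged agent is removed

class Agent:
    def __init__(self, name):
        self.name = name
        self.apples = 0
        self.timeout = 0  # steps remaining out of the game

    def active(self):
        return self.timeout == 0

def step(agent, action, opponent, apples_left):
    """Resolve one agent's action; return the remaining apple count."""
    if not agent.active():
        agent.timeout -= 1  # inactive agents just wait out the timeout
        return apples_left
    if action == "gather" and apples_left > 0:
        agent.apples += 1
        return apples_left - 1
    if action == "tag" and opponent.active():
        opponent.timeout = TAG_TIMEOUT  # remove the other agent temporarily
    return apples_left

a, b = Agent("A"), Agent("B")
apples = step(a, "tag", b, 10)          # A tags B
apples = step(b, "gather", a, apples)   # B is out, so it gathers nothing
print(a.apples, b.apples, b.timeout)    # 0 0 4
```

The trade-off is visible even in this toy version: tagging earns no apples directly, but it buys the tagger uncontested turns at the pile.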

The testers found that when apples were plentiful, the two AIs tended to ignore each other and gather, but as the supply became scarce, the lasers started firing. This is probably to be expected; scarcity tends to make most people compete more fiercely, too. Black Friday, anyone?

More interesting, though, was that when a slightly more computationally powerful AI was introduced into the game, it seemed to fire the laser at the other agent regardless of how many apples remained. The testers theorize that firing the laser demands more skill and more complex behavior, which the more advanced AI was simply better equipped to learn.

A second game, called Wolfpack, revolved around two AIs trying to corner and hunt a third AI through a course filled with obstacles. What's interesting is that points are awarded not just to the player that catches the prey, but to any player close by when the capture happens, so it can actually pay to work together.
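The Wolfpack scoring rule can be sketched like this; the capture radius and point values are hypothetical placeholders, not figures from the study:

```python
# Hypothetical sketch of the Wolfpack scoring rule: when the prey is
# caught, every wolf within some capture radius of the catch site
# shares in the reward, so staying near a teammate pays off.
CAPTURE_RADIUS = 2.0   # assumed value for illustration
CAPTURE_REWARD = 10    # assumed value for illustration

def distance(a, b):
    """Euclidean distance between two 2D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def wolfpack_rewards(wolf_positions, catch_site):
    """Give the capture reward to every wolf close enough to the
    catch site, not only the one that made the catch."""
    return [CAPTURE_REWARD if distance(p, catch_site) <= CAPTURE_RADIUS else 0
            for p in wolf_positions]

# Both wolves are near the catch, so both score:
print(wolfpack_rewards([(0, 0), (1, 1)], (0, 0)))  # [10, 10]
# A distant wolf gets nothing:
print(wolfpack_rewards([(0, 0), (5, 5)], (0, 0)))  # [10, 0]
```

Under a rule like this, hunting alongside a partner strictly dominates lone-wolf behavior whenever the pair can still make the catch.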

The testers did, in fact, find that the AIs worked together more often in the second game, but again, this could stem from the fact that cooperating and planning were the more complex, computationally demanding choices.

What the testers more or less concluded is that AIs respond to the context and rules of the situation, and that this determines whether they act as individuals or work together for the betterment of the whole. Apparently, the key to keeping AIs from making cold, logical, selfish choices is to establish a context and set of rules that make cooperation both the most stimulating and most rewarding option.

Perhaps someone should tell that to Sarah Connor.