Google's DeepMind Just Beat the World Master at Go - Here's What That Means for the Future of AI

Tuesday, 15 March 2016 - 11:16AM
Science News
Google's DeepMind just challenged the world champion of the Chinese board game Go to a series of five matches, and beat humanity four to one. In the fifth and final match, AlphaGo made a significant mistake against human player Lee Sedol, but then "clawed its way back," according to DeepMind founder Demis Hassabis.

Google's deep learning artificial intelligence beat Sedol in the first three games of the series, which secured the overall win and demonstrated a breakthrough that had been thought to be decades away. It then lost the fourth game (as if out of exhaustion and complacency, just like a human would) and came back to win a nail-biting fifth.

So what does this mean? Beating a top human player at Go has long been hailed as a holy grail of artificial intelligence, because the game is far more complex than chess and has vastly more possible positions. By one common estimate, Go allows on the order of 10^700 possible game variations, while chess, once considered the ultimate challenge for artificial intelligence, allows only around 10^60.
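To see why Go dwarfs chess for a brute-force searcher, a back-of-the-envelope estimate of the game tree helps: the number of possible continuations grows roughly as the number of moves available per turn raised to the power of the game's length. The sketch below is purely illustrative; the branching factors and game lengths are commonly cited rough averages rather than figures from this article, and published complexity estimates vary widely depending on exactly what is counted.

```python
# Back-of-the-envelope game-tree size: (average branching factor) ** (typical game length).
# These inputs are rough, commonly cited averages used only for illustration; actual
# complexity estimates differ depending on whether you count positions, legal games, etc.

def game_tree_estimate(branching_factor: int, game_length: int) -> int:
    """Crude estimate of the number of distinct game continuations."""
    return branching_factor ** game_length

chess = game_tree_estimate(branching_factor=35, game_length=80)    # roughly 10^123
go = game_tree_estimate(branching_factor=250, game_length=150)     # roughly 10^359

# Report the order of magnitude (number of decimal digits minus one).
print(f"chess: ~10^{len(str(chess)) - 1}")
print(f"go:    ~10^{len(str(go)) - 1}")
```

However the counting is done, the gap is enormous: exhaustively searching Go's tree is far beyond any conceivable hardware, which is why AlphaGo had to learn to evaluate positions rather than enumerate them.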

The program, called AlphaGo, was designed specifically to play and master Go using sophisticated deep neural networks: "value networks" that evaluate board positions and "policy networks" that select the next move. The researchers took a novel approach to training, combining supervised learning from games played by human experts with trial-and-error reinforcement learning from self-play. As a result, the program won 99.8% of its games against other Go programs, and then defeated the human European champion, Fan Hui, five games to none before challenging Sedol.
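To make "policy network" and "value network" concrete, here is a minimal sketch in PyTorch. It is not DeepMind's code: the four-plane board encoding, the layer sizes, and the two-stage training description in the comments are simplifying assumptions, kept only to show the shape of the idea.

```python
# Minimal sketch of the two network types: a policy network that maps a board position
# to a probability distribution over moves, and a value network that maps a position
# to a single win-probability estimate. Sizes and encodings are illustrative assumptions.

import torch
import torch.nn as nn

BOARD_SIZE = 19  # standard Go board

class PolicyNetwork(nn.Module):
    """Predicts a probability distribution over the 19x19 intersections."""
    def __init__(self, in_planes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * BOARD_SIZE * BOARD_SIZE, BOARD_SIZE * BOARD_SIZE)

    def forward(self, board: torch.Tensor) -> torch.Tensor:
        x = self.conv(board).flatten(start_dim=1)
        return torch.softmax(self.head(x), dim=-1)  # probability of playing each point

class ValueNetwork(nn.Module):
    """Scores a position: a single number in (0, 1) read as a win probability."""
    def __init__(self, in_planes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * BOARD_SIZE * BOARD_SIZE, 1)

    def forward(self, board: torch.Tensor) -> torch.Tensor:
        x = self.conv(board).flatten(start_dim=1)
        return torch.sigmoid(self.head(x))  # estimated probability of winning

# Stage 1 (supervised): train the policy network to imitate the moves human experts
# actually played, e.g. with cross-entropy loss. Stage 2 (self-play): let the program
# play against itself, reinforce moves from winning games, and use the resulting
# (position, outcome) pairs to train the value network.

policy, value = PolicyNetwork(), ValueNetwork()
fake_position = torch.zeros(1, 4, BOARD_SIZE, BOARD_SIZE)  # batch of one empty board
print(policy(fake_position).shape, value(fake_position).item())
```

In play, the two networks complement each other: the policy network narrows the search to a handful of promising moves, and the value network judges the positions those moves lead to, so the program never has to search the full game tree.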

While AlphaGo itself has a very narrow purpose, the deep learning technology that allows it to learn and improve through trial and error could have enormous implications for artificial intelligence. In the short term, it could improve search and prediction in technologies like Siri; Google's Francois Chollet asserted that a version of the AlphaGo tech "will be in your pocket in 2 years." In the long term, neural networks are getting closer and closer to approximating the kind of learning a human brain is capable of.

"Our neural network training algorithms are more important to #AlphaGo performance than the hardware it uses to play," Hassabis said.

However, that doesn't mean we're there yet, or even close. This is a huge breakthrough, and an AI undeniably has to be very advanced to beat the world champion at a game as complex as Go, but it has still only been tested within a narrow scope. The computing power and capacity to learn will give the technology all sorts of applications, but that doesn't necessarily make it a precursor to the more nuanced "thinking" required for AI to fulfill sci-fi's predictions of consciousness or "humanlike" intelligence. As The Guardian points out, everyone was making Skynet predictions after IBM's Deep Blue beat the world champion at chess, and the realization of those predictions is still not even on the horizon.

That being said, the two technologies are very different. Deep Blue was a work of engineering tailored specifically to the problem of chess, while AlphaGo is, by all accounts, built on general-purpose learning algorithms that were then trained and tuned for Go. So while we may not be looking at the synths from AMC's Humans anytime soon, this tech could still revolutionize artificial intelligence as we know it.
Artificial Intelligence
Science News
