Google's AlphaGo Quickly Taught Itself to Become a Chess Master

Wednesday, 06 December 2017 - 8:46PM
Technology
Artificial Intelligence
Image: Wikimedia Commons
Not content with building a computer that can thoroughly trounce any living human at a single board game like Go, Google's DeepMind engineers have been expanding AlphaGo's repertoire by letting it play around with other traditional competitive pastimes as well.

The powerful machine learning program was recently upgraded so that it no longer requires any human input to learn new skills: given nothing but a game's rules, it plays endlessly against a copy of itself, gradually working out the promising moves and sneaky winning strategies on its own.
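To make that idea concrete, here is a deliberately tiny sketch of self-play learning - a toy illustration in Python, not DeepMind's actual code or anything close to its scale. The same agent plays both sides of a simple take-1-to-3 Nim game and keeps win/loss tallies for every move it tries, so later games gradually favour the moves that have won before:

```python
import random

# Toy self-play learner (illustrative only): the agent plays both sides of a
# Nim game (take 1-3 stones; whoever takes the last stone wins) and records
# how often each (stones_left, move) choice ended in a win for the side that made it.
stats = {}  # (stones_left, move) -> [wins, plays]

def win_rate(stones, move):
    wins, plays = stats.get((stones, move), (0, 1))  # unseen moves score 0%
    return wins / plays

def choose_move(stones, explore=0.2):
    """Usually pick the best-scoring move; sometimes explore a random one."""
    legal = list(range(1, min(3, stones) + 1))
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda m: win_rate(stones, m))

def self_play_game(start=15):
    """The agent plays itself once; returns each side's moves and the winner."""
    stones, player, history = start, 0, {0: [], 1: []}
    winner = None
    while stones > 0:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    return history, winner

for _ in range(20000):
    history, winner = self_play_game()
    for player, moves in history.items():
        for key in moves:
            wins, plays = stats.setdefault(key, [0, 0])
            stats[key] = [wins + (player == winner), plays + 1]

# After enough games the tallies tend to rediscover Nim's winning rule:
# leave your opponent a multiple of four stones whenever you can.
print(choose_move(6, explore=0.0))  # usually 2, leaving a pile of 4
```

The real systems replace this little tally table with deep neural networks and add sophisticated search on top, but the loop is the same in spirit: play, record what worked, play again.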

This new "AlphaGo Zero" was initially tested on Go, the ancient Chinese board game, but scientists have since given a generalized version, dubbed AlphaZero, the chance to study Western chess and the similar Japanese game shogi. In under a day of self-play, the program learned enough about each game to outperform top-tier human players, suggesting that the AI could be turned to a wide variety of tasks beyond playing the same game over and over.

While there's a world of difference between board games and, say, more complex video games like StarCraft (which Facebook's AI is still miserably bad at), Google's DeepMind will no doubt keep testing AlphaGo to see what other skills the program can be taught.

With enough experimentation, we may find that a wide variety of common tasks currently performed by humans would be better left to automated artificial intelligence. The new AlphaGo is designed to be customizable for plenty of different scenarios, and will probably uncover a lot of shortcuts that make life easier for its users.

As much as humans like to trace patterns and trends, we're not all that good at it - not compared to computers, at least. AlphaGo can learn board games so quickly because it takes in and organizes enormous amounts of data about a game, building what amounts to a vast table of which moves work best in which situations.
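That "table" is really just a huge lookup from positions to how well each move has fared. The miniature example below is entirely made up (real systems hold the equivalent of billions of such judgements, compressed into a neural network rather than a literal table), but it shows the basic shape of the idea:

```python
# Miniature, invented "best move" table: position -> {move: estimated win rate}.
# A real system's equivalent covers astronomically more positions.
move_table = {
    "opening position":       {"develop knight": 0.54, "push edge pawn": 0.48},
    "opponent attacks queen": {"retreat queen": 0.57, "counter-attack": 0.51,
                               "ignore threat": 0.22},
}

def recommended_move(position: str) -> str:
    """Return the move with the highest estimated win rate for a known position."""
    options = move_table[position]
    return max(options, key=options.get)

for position in move_table:
    print(f"{position}: {recommended_move(position)}")
```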

In some cases, a deep learning AI's ability to work out the smartest way to proceed, based on data from previous attempts, can border on precognition. By analyzing all the available data, a computer can make educated guesses about future strategies that are far too complex for humans to keep up with.
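One way to picture that kind of machine "foresight" is exhaustive lookahead - again a toy sketch rather than DeepMind's actual method, reusing the little Nim game from earlier. The program simply checks every possible future line of play, something no human could do for a game of any real size:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_force_win(stones: int) -> bool:
    """True if the player to move can guarantee a win (take 1-3, last stone wins)."""
    if stones == 0:
        return False  # the previous player just took the last stone and won
    # Winning if at least one move leaves the opponent in a losing position.
    return any(not can_force_win(stones - take)
               for take in range(1, min(3, stones) + 1))

for pile in range(1, 13):
    print(pile, "win" if can_force_win(pile) else "loss")
# The search rediscovers the pattern the self-play tallies only hinted at:
# multiples of four are lost positions for the player to move.
```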

Once these deep learning bots get plugged into more statistical analysis, we'll likely see people discovering shortcuts in leaps and bounds. One existing AI, for example, has been put to work on hotel bookings and spotted a trend that certain kinds of vacationers prefer certain types of rooms - the sort of pattern a human analyst could easily miss in the raw data, but one that AI programs can tease out by reading between the lines for connections that aren't obvious at first glance.
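As a rough illustration of that kind of pattern-spotting (the booking records below are entirely invented), even a few lines of code can tally every combination of guest type and room type and surface preferences that would be easy to miss when skimming the raw list:

```python
from collections import Counter

# Hypothetical booking records (made up for illustration).
bookings = [
    ("family", "suite"), ("business", "single"), ("family", "suite"),
    ("couple", "double"), ("business", "single"), ("family", "double"),
    ("couple", "suite"), ("business", "single"), ("family", "suite"),
]

# Count how often each (guest type, room type) pairing occurs.
pairings = Counter(bookings)
for (traveller, room), count in pairings.most_common(3):
    group_size = sum(1 for t, _ in bookings if t == traveller)
    print(f"{traveller} -> {room}: {count} bookings ({count / group_size:.0%} of that group)")
```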

With AI finding ways to make our lives more comfortable, things could get a lot easier for us in the future - even if it means accepting that much white-collar work will go through the same transition that hit blue-collar labor once automation took over manufacturing.

Of course, AlphaGo's continued development also means that humans won't be out-maneuvering robots on the battlefield in the long run. If a robot ever decides that humanity is a threat to its continued existence, we don't stand a chance.

It might be for the best, then, if we don't do anything to upset our future robot overlords. Maybe now is a good time to stop kicking the office photocopier when it takes too long to spit out your papers.