A Stanford Artificial Intelligence Researcher on Why We Don't Need to Worry About Evil Robots
So, just to give you whiplash: on the same day we told you that AI is one of the biggest existential risks to humanity, according to some experts, here's one expert who thinks the fear-mongering about AI is completely overblown. According to Stanford artificial intelligence researcher Andrew Ng, who makes his living building AI, most of the experts who are afraid of the technology aren't actually working on it.
Speaking to Fusion, Ng claimed that theorists like Stephen Hawking and Elon Musk are obviously geniuses in their own right, but that they may not have as much knowledge of what modern AI technologies actually look like. "For those of us shipping AI technology, working to build these technologies now, I don't see any realistic path from the stuff we work on today, which is amazing and creating tons of value, to the software we write turning evil."
"Computers are becoming more intelligent and that's useful as in self-driving cars or speech recognition systems or search engines. That's intelligence," he said. "But sentience and consciousness is not something that most of the people I talk to think we're on the path to."
But, to be fair, in these statements Ng is only really speaking about AI at this particular moment. According to Moore's law, the number of transistors on an integrated circuit doubles every two years (some cite 18 months), which means that computing power is expected to grow exponentially, rather than steadily, for many years. When asked to apply Moore's law to his thinking and project 40 years out, to whether AI could possibly achieve human-like intelligence or sentience by then, Ng said:
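To make the scale of that projection concrete, here is a back-of-the-envelope sketch (the function name and numbers are illustrative, not from the article): under a two-year doubling period, a 40-year horizon means 20 doublings, roughly a million-fold increase in transistor counts.

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 40 years at a 2-year doubling period: 20 doublings, about a million-fold.
print(moores_law_factor(40))        # 1048576.0
# The more aggressive 18-month figure compounds far faster over the same span.
print(moores_law_factor(40, 1.5))
```

Either way, the point the article leans on is that hardware growth compounds, which is why a 40-year projection looks so different from a straight-line one.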
"I think to get human-level AI, we need significantly different algorithms and ideas than we have now." He used the example of English-to-Chinese translation systems, which have essentially read all of the English-Chinese texts in the world, "way more language than any human could possibly read in their lifetime." And yet they remain far inferior to humans at accurately translating a text, even though those humans have read only a minuscule fraction of the texts in existence. "So that says the human's learning algorithm is very different."
He went on to explain that while it's very possible we will achieve human-like artificial intelligence in the distant future, it's so far off that the question is effectively moot. "I don't work on preventing AI from turning evil for the same reason that I don't work on combating overpopulation on the planet Mars. Hundreds of years from now when hopefully we've colonized Mars, overpopulation might be a serious problem and we'll have to deal with it. It'll be a pressing issue. There's tons of pollution [on Mars] and people are dying and so you might say, 'How can you not care about all these people dying of pollution on Mars?' Well, it's just not productive to work on that right now."
"Maybe hundreds of years from now, maybe thousands of years from now, I don't know, maybe there will be some AI that turns evil," he said, "but that's just so far away that I don't know how to productively work on that."
Interestingly enough, he also disagreed with the hype surrounding creativity as a new holy grail for "true" artificial intelligence. With the Turing test losing credibility, many AI theorists are eschewing the notion that an AI should be able to pass for human and are instead attempting to create a computer that can generate original thought. The Lovelace Test, for example, is a proposed replacement for the Turing test that measures AI creativity, but Ng claims that "creativity as intelligence" may be a false equivalence as well:
"I feel like there is more mysticism around the notion of creativity than is really necessary. Speaking as an educator, I've seen people learn to be more creative. And I don't think creativity is something that will always be beyond the realm of computers, though that day might be hundreds of years from now. When machines have so much muscle behind them that we no longer understand how they came up with a novel move or conclusion, we will see more and more of what look like sparks of brilliance emanating from machines."