Oxford Professor Claims AI Will Take Over the World in the Next Few Decades

Monday, 29 June 2015 - 4:04PM
From Ex Machina to Avengers: Age of Ultron to AMC's Humans, recent science fiction has been all too concerned with machines that reach the singularity, or the point at which they are more intelligent than humans and potentially revolt against us. While many experts claim that we're nowhere near that kind of technology, an Oxford academic begs to differ. Dr. Stuart Armstrong of the Future of Humanity Institute at Oxford University claims that our increasing dependence on AI will lead to superintelligent machines that take over the world from humans, possibly in the next few decades. 

"Humans steer the future not because we're the strongest or the fastest, but because we're the smartest," Armstrong told The Telegraph. "When machines become smarter than humans, we'll be handing them the steering wheel."


And he doesn't envision an individual sentient AI like Ex Machina's Ava, but even more terrifyingly, an army of superintelligent robots. He predicts that AI will eventually be able to harness huge amounts of computing power, allowing it to create a global network called Artificial General Intelligence (AGI), through which robots all over the world could communicate without any help or interference from humans. If AGI becomes a reality, it would be able to control any system that runs on computing power, such as transport systems, national economies, financial markets, healthcare systems, and even aspects of the military. 

"Anything you can imagine the human race doing over the next 100 years there's the possibility AGI will do very, very fast," said Armstrong.

Armstrong also warned against the possibility of AIs misinterpreting human instructions with catastrophic results. If they are smarter than humans, and yet less adept at understanding the nuances of human language, then they will interpret instructions very literally and possibly disastrously. For example, he warned that "prevent all human suffering" could be interpreted as "kill all humans," or that "keep humans safe and happy" could translate to "entomb everyone in concrete coffins on heroin drips."

"There is a risk of this kind of pernicious behaviour by an AI. You can give an AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."


His comments come at a time when Chinese AI researchers claim to have created a program that beats humans on a verbal IQ test, precisely because they placed more emphasis on understanding the nuanced relationships between words. But even if a robot did grasp the subtleties of the English language, such misunderstandings would arise less from poor comprehension than from the fact that a robot simply does not think in a human-centric way. 

The answer, it seems, would be to come up with some kind of "morality" that would prevent an AI from killing, enslaving, or brainwashing all humans on principle. Unfortunately, humans have spent the entirety of history debating moral questions, and even when we reach a virtual consensus on one, we're still not very good at following our own dictums. 

"Humans are very hard to learn moral behavior from," said Armstrong. "They would make very bad role models for AIs."