AI Experts Warn the UN About the Dangers of Artificial Superintelligence

Friday, 16 October 2015 - 3:31PM
On Wednesday, UN delegates met at an event hosted by Georgia's mission to the UN to discuss emerging global risks, especially those that may arise from new technologies. Much of the discussion revolved around more commonly voiced concerns, such as chemical, biological, radiological, and nuclear (CBRN) technologies, but AI experts Nick Bostrom and Max Tegmark were also on hand to discuss the potential existential threat of artificial superintelligence.

In the video of the session, Max Tegmark's talk begins at 1:55 and Nick Bostrom's at 2:14.

Tegmark is a cosmologist and professor at MIT, a leading voice on artificial intelligence, and a co-founder of the Future of Life Institute, which works to mitigate the existential risk that AI poses to humanity. Nick Bostrom is a philosopher and AI theorist at the University of Oxford, as well as the author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies. He popularized the term "superintelligence" to mean AI that greatly surpasses the human brain in virtually all areas, and that may even be able to upgrade itself to become ever smarter than humans.

Both Tegmark and Bostrom agreed that while technology is essential to society and can improve the human condition in many ways, there is also an imminent danger of losing control of it. Tegmark claimed that we are already on the road to superintelligence, and that AI is advancing much more quickly than programmers originally expected:

"Early progress in AI tended to involve... good, old-fashioned AI where some human programmers taught the machine to do something that it could then do way faster than [a human]," Tegmark told the UN. "But the most recent breakthroughs that have happened in the last five years, that people thought would take decades, have involved a completely different approach, where a machine actually learns like a child. It can take vast amounts of data and use deep learning to... learn all sorts of things that the programmer has no idea how it did it, just like your children learn to speak your language and you don't even know exactly how they did it."
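
The distinction Tegmark draws can be made concrete with a small sketch of our own (an illustration, not from the talk): for a toy task, deciding whether a point lies above the line y = x, a "good old-fashioned AI" programmer writes the rule by hand, while a learning system, here a simple perceptron, infers the same rule from labeled examples without anyone writing it down.

```python
# Toy illustration (ours, not from Tegmark's talk): hand-coded rules vs. learning.
# Task: decide whether a point (x, y) lies above the line y = x.

import random

def hand_coded_classifier(x, y):
    """GOFAI style: a human programmer writes the rule down explicitly."""
    return 1 if y > x else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learning style: a perceptron infers the rule from labeled examples."""
    w_x, w_y, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for x, y, label in examples:
            pred = 1 if w_x * x + w_y * y + b > 0 else 0
            err = label - pred          # perceptron rule: update only on mistakes
            w_x += lr * err * x
            w_y += lr * err * y
            b += lr * err
    return w_x, w_y, b

random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [(x, y, hand_coded_classifier(x, y)) for x, y in points]
w_x, w_y, b = train_perceptron(data)

test = (0.2, 0.7)  # a point comfortably above the line y = x
print(hand_coded_classifier(*test))                        # 1, by the written rule
print(1 if w_x * test[0] + w_y * test[1] + b > 0 else 0)   # typically 1, by the learned rule
```

Deep learning scales this same idea up to millions of learned parameters, which is why, as Tegmark notes, even the programmers often cannot say exactly how the trained system does what it does.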

Bostrom, for his part, expressed concern that once AI systems become more intelligent than we are, we will no longer be able to control them, and therefore won't be able to ensure that they're beneficial, or at least not detrimental, to humanity. According to Bostrom, researchers do not yet have reliable methods for controlling such superintelligent systems:

"There are plausible scenarios in which superintelligent systems become very powerful," said Bostrom. "And there are these superficially plausible ways of solving the control problem: ideas that immediately spring to people's minds that, on closer examination, turn out to fail. So there is this currently open, unsolved problem of how to develop better control mechanisms."
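
Bostrom doesn't name a failed control idea in this excerpt, but a standard toy example (our illustration, with made-up numbers, not from his talk) is the naive off-switch: give the system a shutdown button and assume the problem is solved. For a pure expected-utility maximizer, disabling the button can simply score higher than leaving it alone:

```python
# Toy model (ours, not Bostrom's): why "just add an off switch" can fail
# for a naive expected-utility maximizer. All numbers are illustrative.

P_HUMANS_PRESS_SWITCH = 0.3   # assumed chance operators shut the agent down
U_COMPLETE_TASK = 10.0        # utility the agent assigns to finishing its task
U_SHUT_DOWN = 0.0             # utility if it is switched off before finishing
COST_DISABLE_SWITCH = 1.0     # small effort cost to disable the switch

def expected_utility(disable_switch: bool) -> float:
    if disable_switch:
        # With the switch disabled, the task always completes.
        return U_COMPLETE_TASK - COST_DISABLE_SWITCH
    # Otherwise the agent risks being shut down before finishing.
    return ((1 - P_HUMANS_PRESS_SWITCH) * U_COMPLETE_TASK
            + P_HUMANS_PRESS_SWITCH * U_SHUT_DOWN)

print(expected_utility(disable_switch=False))  # 7.0
print(expected_utility(disable_switch=True))   # 9.0 -> the maximizer disables the switch
```

Designing powerful agents that remain indifferent to, or actively accepting of, their own shutdown (sometimes called corrigibility) is one form of the open control problem Bostrom describes.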

Bostrom went a step further with a variation on his familiar argument that AI is more of an existential threat than global warming. He claimed that human-made technology, including but not limited to AI, will pose a greater threat over the next century than any natural disaster:

"All the really big existential risks are in the anthropogenic category," he said. "Humans have survived earthquakes, plagues, asteroid strikes, but in this century we will introduce entirely new phenomena and factors into the world. Most of the plausible threats have to do with anticipated future technologies."


Via Gizmodo.