Oxford Professor Claims AI Superintelligence Will Form New World Order

Tuesday, 19 August 2014 - 10:16AM
Technology

The work of Nick Bostrom, a philosophy professor at the University of Oxford, often sounds like science fiction: in 2003 he published a paper arguing that "we are almost certainly living in a computer simulation," and in 2011 another asserting that, since a human can only affect a finite part of the universe, standard aggregative ethics breaks down (adding or subtracting a finite amount of "good" from an infinite total leaves the infinity unchanged). Now, in his new book Superintelligence, he claims that there will likely come a time when robots take over the world, and possibly kill us all.
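
The arithmetic behind that 2011 claim can be made explicit; the line below is a loose paraphrase of the cardinality point rather than a formula taken from Bostrom's paper. If the total value of the universe is already infinite, then

\[
\infty + c \;=\; \infty - c \;=\; \infty \qquad \text{for any finite } c,
\]

so no act that adds or removes only a finite amount of good changes the total.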


The concept of a "superintelligence" is an old one in the artificial intelligence community: an artificial intelligence aware and capable enough to improve itself, thereby setting off an exponential "intelligence explosion." Such an AI's potential for increased intelligence would, in theory, be unbounded, leaving it far more intelligent than any human. According to Bostrom, the pertinent question for the human race is whether these machines will remain "tool" machines serving human purposes, or whether it is inevitable that they will develop their own agendas and desires, even if we never program those features in ourselves.


In an interview with Vox, Bostrom claimed that the first step in his argument is simply that "at some point we will create machines that are superintelligent, and that the first machine to attain superintelligence may become extremely powerful to the point of being able to shape the future according to its preferences." The second step, which expands on the latter point, asserts that "it looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about." In other words, if we do create a superintelligence, it will be extremely difficult to make it conform to anything like Asimov's laws of robotics, and it could potentially kill or enslave the entire human race.


When asked why he is confident that we will be able to create this superintelligence, given how difficult it has proven to program certain features of intelligence, such as creativity, Bostrom replied, "We did a survey of opinions among some of the leading experts in artificial intelligence, which I report in the book. One of the questions we ask was, 'by what year do you think there is a 50 percent probability that we will have human-level machine intelligence?' And the median answer to that was 2040 or 2050, depending on precisely which group of experts we polled."


He also responded to the arguments of outspoken AI critic Hubert Dreyfus. Dreyfus claimed, among other things, that building a computer that behaved like a human brain was virtually impossible, since computers are digital and operate on discrete numerical functions, while the brain appears to be analog, operating on continuous, non-quantized signals. According to Bostrom, "We have an existence proof of general intelligence. We have the human brain, which produces general intelligence. And one pathway towards machine intelligence is by figuring out how the human brain accomplishes this by studying the neural network that is inside our heads. We can, perhaps, discern what algorithms and computational architectures are used in there and then do something similar in computers... An even more radical approach is to literally copy a particular human brain, as in the approach of whole-brain emulation. We would not have to understand at any higher level of description how the brain produces intelligence. We would just need to understand what the components do."


He also referred to the possibility of cybernetically enhancing the human brain, a common theme in the field of superintelligence. Many researchers believe that we may be able to enhance processing speed and memory by adding mechanical parts to an organic brain. Bostrom puts a slight spin on this, asserting that humans with enhanced intelligence may be able to solve the problems that AI researchers currently find intractable: "Another consideration is that even if it were, which I do not believe it is, outside the reach of current human intelligence to conceive and engineer artificial intelligence, human biological intelligence is itself something that can be enhanced, and I believe will be enhanced, perhaps in latter half of this century. So we also need to consider that there could be these much enhanced human scientists and computer scientists who will be able to make progress even if we were stumped."


Above all, Bostrom asserts that we might create a superintelligence by accident, and in particular that we might unintentionally create an intelligence with its own (possibly misanthropic) goals: "One approach that one might think would obviously be the safest bet is to try to engineer a system that would not be of this agent-like character: that is, to try to build a tool AI. It is still worth exploring that further, but it's a lot less obvious than it looks that it actually is a fruitful avenue. For a start, you might end up with an agent even if you didn't set out to create it. So, if there are these internal processes within the system that amount to very powerful search processes for plans, it might well be that one of these internal processes will exhibit agent-like behavior, even if you didn't define the whole system to be an agent. And these agent-like processes that emerge spontaneously might then be more dangerous, because they were not put in on purpose."


[Credit: Warner Bros. Pictures]


Most interestingly, he speculated that the result could be a single "new world order," in which one superintelligence acts as the sole agent at the highest level of decision making: "Where the transition to the machine intelligence era is very rapid, and you have an intelligence explosion at some point, [then] it is likely that there will be one superintelligence able to form a singleton, a world order where at the highest level of decision making there is only one agent. Say you go from human-level to superintelligence within four hours or two days or some very short period of time like that. In that case, it is very unlikely that there will be two or more development teams that undergo this transition in parallel. Usually, the leader is at least two weeks or months ahead of the next best project. In that case, you might have this singleton outcome, where there is one thing that will shape the future according to its preferences."
