Nick Bostrom: Technology is a Bigger Threat to Humanity than Global Warming

Monday, 06 October 2014 - 1:58PM
Technology

Science fiction has offered all sorts of creative and entertaining explanations for the end of humanity, from killer asteroids to malevolent aliens to global infertility. Asked how humanity might actually meet its demise, though, most people would point to global warming. Nick Bostrom, director of the Future of Humanity Institute at Oxford, disagrees: he argues that humanity is far more likely to be threatened by various "sci-fi" technologies than by a changing climate.

Bostrom doesn't deny that global warming is real or that its effects will be devastating on some level, but he argues that those effects are unlikely to amount to an extinction-level threat. "Global warming is very unlikely to produce an existential catastrophe," Bostrom told AlterNet. "The Stern report says it could hack away 20 percent of global GDP (Gross Domestic Product). That's, like, how rich the world was 10 or 15 years ago. It would be lost in the noise if you look in the long term."
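
Bostrom's comparison is easy to check with rough numbers. Here is a back-of-the-envelope sketch, assuming an illustrative long-run global growth rate of about 3 percent a year (the growth figure is our assumption, not Bostrom's or Stern's):

```python
# Back-of-the-envelope check of Bostrom's point (illustrative assumptions):
# if global GDP grows ~3% per year, how many years of growth does a
# one-time 20% loss correspond to?

growth_rate = 0.03   # assumed long-run annual global GDP growth
loss = 0.20          # Stern-style estimate of GDP "hacked away"

years = 0
level = 1.0
while level < 1.0 / (1.0 - loss):  # growth needed to offset a 20% cut
    level *= 1 + growth_rate
    years += 1

print(f"A {loss:.0%} loss undoes roughly {years} years of {growth_rate:.0%} growth")
# -> roughly 8 years at 3% growth, or about 12 years at 2%, consistent
#    with Bostrom's "how rich the world was 10 or 15 years ago"
```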

Instead, he believes the most likely threats to humanity's survival are emerging technologies: bioengineered pandemics, nanotechnology, and superintelligence. Nanotechnology is already in use in a myriad of applications, from storing electricity in everyday materials to fighting the recent Ebola outbreak. Millions of dollars have been invested in nanotechnology research, and some theorists warn that if nanotech advances unchecked, we could create tiny self-replicating machines capable of consuming all the matter on Earth to make more of themselves. This vision of the apocalypse is called the "grey goo" scenario. There is no clear consensus in the scientific community, however, that such self-replicating machines are even possible, and even if they are, they are likely decades away.
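
What makes grey goo such a vivid apocalypse is the arithmetic of doubling. A minimal sketch, using purely illustrative figures for the replicator's mass and doubling time (neither number comes from the article):

```python
import math

# Illustrative toy numbers for the "grey goo" scenario: a self-replicating
# machine whose population doubles each cycle consumes mass exponentially.

earth_mass_kg = 5.97e24      # approximate mass of the Earth
replicator_mass_kg = 1e-15   # assumed mass of one nanoscale replicator

# Doublings needed for one replicator's lineage to equal Earth's mass:
doublings = math.log2(earth_mass_kg / replicator_mass_kg)
print(f"~{doublings:.0f} doublings")  # ~132 doublings

# Even at one doubling per hour (a generous assumption), that is under a
# week of unchecked replication -- which is why the scenario gets discussed,
# and why the real question is whether such replicators can exist at all.
```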

A bioengineered pandemic may also not be outside the realm of possibility. University of Wisconsin scientists recently reverse-engineered the swine flu, prompting many in the scientific community to denounce their research as "putting the global population at risk." Martin Rees of the Centre for the Study of Existential Risk at Cambridge disagrees with Bostrom's view that global warming is not an existential threat, but he is also deeply concerned about the state of the biotech industry. "My worst nightmare," said Rees, "is the lone weirdo ecofanatic proficient in biotech, who thinks it would be better for the world if there were fewer humans to despoil the biosphere."

Bostrom is most concerned with superintelligence: the possibility that we will create an artificial intelligence able to improve itself, setting off an exponential explosion of intelligence that leaves humanity at the mercy of far superior beings of our own making. Considering that even the AI credited with passing the Turing Test was primitive and unimpressive compared to a human brain, this may well be the least likely doomsday scenario. But Bostrom worries about it partly because artificial intelligence could be humanity's salvation as well as its downfall: if we could build an AI with the capacity of a human brain, people could create mechanical replicas of their own brains and live on as immortal machines.
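
The feedback loop behind the intelligence-explosion worry can be sketched as a toy model. Everything here is an assumption for illustration (the starting level, the feedback constant, the notion of a "design cycle"); nobody knows the real dynamics:

```python
# A minimal toy model of the feedback loop Bostrom worries about: an AI
# whose rate of improvement grows with its current capability.

capability = 1.0   # start at rough human parity (arbitrary units)
feedback = 0.10    # assumed fraction of capability converted into
                   # self-improvement per design cycle

for cycle in range(1, 101):
    capability *= 1 + feedback * capability  # better AI -> faster improvement
    if capability > 1000:
        print(f"1000x starting capability after {cycle} cycles")
        break
# With these toy numbers the threshold is crossed after ~15 cycles: growth
# that looks sedate at first runs away abruptly, which is the crux of the
# "explosion" argument.
```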

"The stakes are very large," said Bostrom. "There is this long-term future that could be so enormous. If our descendants colonized the universe, we could have these intergalactic civilizations with planetary-sized minds thinking and feeling things that are beyond our ken, living for billions of years. There's an enormous amount of value that's on the line."

