Silicon Valley Leader on Why Artificial Intelligence Probably Won't Kill Us All
More than a few of the greatest minds in the world are concerned about AI becoming superintelligent and potentially wiping out humanity; Stephen Hawking has gone on record saying that AI could spell the end of the human race, Oxford philosopher Nick Bostrom wrote an entire book on the topic, and Elon Musk seems to say something alarmist about artificial intelligence every week. But many experts disagree that AI is such an imminent threat, including Evernote founder and Silicon Valley leader Phil Libin. In a recent interview on The Tim Ferriss Show, Libin argued that these concerns rest on a flawed assumption: that wiping out humanity would be the "smart" thing for a superintelligence to do:
I feel like there's a couple of steps missing in that chain of events. I don't understand why the obviously smart thing to do would be to kill all the humans. The smarter I get the less I want to kill all the humans! Why wouldn't these really smart machines not want to be helpful? What is it about our guilt as a species that makes us think the smart thing to do would be to kill all the humans? I think that actually says more about what we feel guilty about than what's actually going to happen.
The wording of his statement, "the smarter I get, the less I want to kill all the humans!" is perfect, and very Internet. But beyond that, he makes an interesting point: we often take for granted that superintelligent beings, whether alien or synthetic, would conclude from superior logic that the world is better off without humans. Although I would probably argue that ruining the environment is a pretty egregious offense, his point is well taken that this assumption is as much a product of our guilt as of any logical thought process:
I think there are a lot of important issues that are being sublimated into the AI/kill-all-humans discussion that are probably worth pulling apart and tackling independently ... I think AI is going to be one of the greatest forces for good the universe has ever seen and it's pretty exciting we're making progress towards it.
Regardless of whether humans actually deserve to be wiped out of existence, he's absolutely right that we should examine why we feel that way and try to fix it. And he may also be correct that we're galloping towards true AI at breakneck speed, especially if those mildly self-aware NAO robots are any indication.