How Elon Musk's $10 Million Donation Is Preventing AI from Wiping Out Humanity

Wednesday, 01 July 2015 - 1:29PM
Elon Musk is known for many things: founding SpaceX and Tesla Motors, trying to make hyperloop transport a reality, and generally being a real-life Tony Stark. But in recent years, he's also become known for his inflammatory statements about the dangers of AI, particularly his assertions that killer robots could arrive within five years and that creating AI is akin to "summoning the demon." So naturally, a few months ago he donated $10 million to the Future of Life Institute (FLI) to support research that would "keep AI beneficial for humanity." Now that donation has led to grants for 37 different projects, and we finally know how the money is going to prevent AI from killing us all.

Of that $10 million, $7 million is going to grants for various research projects on artificial intelligence. But rather than trying to advance the artificial intelligence technology itself, the projects mostly focus on honing AI's decision-making abilities in order to ensure that the technology is used wisely. 

"There is this race going on between the growing power of the technology and the growing wisdom with which we manage it," FLI president Max Tegmark told Bloomberg. "So far all the investments have been about making the systems more intelligent; this is the first time there's been an investment in the other."


The known specifics of the projects, whose titles include "Understanding When a Deep Network Is Going to Be Wrong" and "How to Build Ethics into Robust Artificial Intelligence," indicate a growing concern with teaching robots to behave ethically and understand human thought processes, so they don't, say, decide to end human suffering by killing all humans. Three studies at UC Berkeley and Oxford University, for example, will help robots learn what humans would prefer them to do based on observations of our behavior. Another project, at the Machine Intelligence Research Institute, is getting $250,000 to develop an ethical system for robots, while a study at Carnegie Mellon will get $200,000 to help AI explain its decisions to humans. Most ominously, a Stanford University study aims to ensure that AI-driven weapons remain under "meaningful human control."

"Building advanced AI is like launching a rocket," FLI co-founder Jaan Tallinn told Android Authority. "The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering."


There's also a project to draft AI-related policy, led by Oxford's Nick Bostrom, who famously claims that an artificial "superintelligence" would form a new world order. This is possibly the least surprising news of all, as Musk has publicly stated that he was influenced by Bostrom's book.



Tegmark, even as he advises caution with new technologies, isn't quite as worried about a robot apocalypse. In fact, he claims that Hollywood's bleak, catastrophic vision of the future may distract from the real issues surrounding AI, the very issues this grant money will go toward addressing. "This week 'Terminator Genisys' is coming out and that's such a great reminder of what we should not worry about," said Tegmark.

"The danger with the Terminator scenario isn't that it will happen, but that it distracts from the real issues posed by future AI. We're staying focused, and the 37 teams supported by today's grants should help solve such real issues."
