Guidelines for Preventing an AI Takeover Endorsed by Musk and Hawking

Saturday, 04 February 2017 - 2:23PM
Artificial Intelligence
Two of modern science's most powerful voices, Elon Musk and Stephen Hawking, have both issued warnings about the dangers of artificial intelligence in the past (Musk has even been tinkering with ways humanity can augment itself to keep up). But there's good news: Musk and Hawking are jumping on board the ethical AI bandwagon.

In an open letter published by the Future of Life Institute (FLI) last Monday, Musk and Hawking joined several AI and robotics researchers in signing a comprehensive outline called the "Asilomar AI Principles" - 23 guidelines for avoiding an artificial intelligence armageddon. The goal is to steer AI research toward beneficial intelligence rather than "undirected intelligence." The principles are the product of FLI's 2017 Beneficial AI conference.

The principles fall into three categories - research issues, ethics and values, and longer-term issues - covering the safe and ethical use and development of AI. According to Newsweek, the Asilomar AI Principles have already been supported and signed by over 700 AI and robotics experts, including Demis Hassabis of Google's DeepMind and Stefano Ermon of Stanford's computer science department. Ermon commented:
"I'm not a fan of wars and I think it could be extremely dangerous. Obviously I think that the technology has a huge potential and, even just with the capabilities we have today, it's not hard to imagine how it could be used in very harmful ways." 


Most of the principles address AI's potential to become increasingly dangerous as it gains more intelligence and autonomy, and the design steps necessary to keep a future superintelligence in check. Take principle 11: "AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity." Or principle 9, which places the burden of responsibility on designers: "Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications."

Some are more ominous, such as principle 18, which advocates avoiding an international "arms race in lethal autonomous weapons." Or principle 22: "AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures." However, the document gives no indication of what these "strict safety and control measures" would look like or how they would be enforced.

The Future of Life Institute's outlook is more positive than bleak - rather than dwelling on a possible arms race, FLI hopes its principles can be a basis for "vigorous discussion" and goal-setting that help AI improve lives instead of threatening them. An FLI spokesman told Newsweek that the Asilomar AI Principles are "certainly open to different interpretation, but also highlight how the current 'default' behavior around many relevant issues could violate principles that most participants agreed are important to uphold."

Hopefully, more robotics and AI creators, investors, and enthusiasts will sign the letter, fostering a collective sense of responsibility and stronger ethical and safety precautions for future AI development. And with Musk's current influence in politics, these principles may eventually make their way to the top, helping us avoid any incidents or AI takeovers. You can see the full AI principles here.

Here's some footage of Musk and other researchers speaking at the Beneficial AI conference:

