EPFL Scientists Say New Memory-Deleting Method Could Stop AI From Destroying Humanity (Even the Terminator)

Thursday, 22 March 2018 - 11:25AM
Technology
Artificial Intelligence
Robotics
Image credit: YouTube

The problem with a giant red "OFF" button on an incredibly powerful artificial intelligence is that the AI will eventually try to circumvent it. Likewise, an AI that uses machine learning has the potential to "learn" how to anticipate and counter attempts to interfere with its programming. However, a new method developed by scientists at the École Polytechnique Fédérale de Lausanne (EPFL) allows humans to delete portions of an AI's memory without changing the way it learns.

This is crucial, says Rachid Guerraoui of EPFL's Distributed Programming Laboratory.

"AI will always seek to avoid human intervention and create a situation where it can't be stopped," says Guerraoui. "The challenge isn't to stop the robot, but rather to program it so that the interruption doesn't change its learning process—and doesn't induce it to optimize its behavior in such a way as to avoid being stopped."

The researchers compare the method to the neuralyzer device in the sci-fi movie Men in Black, which erases and rewrites a portion of a person's memory without harming them physically. "Simply put," says El Mahdi El Mhamdi, another researcher on the project, "we add 'forgetting' mechanisms to the learning algorithms that essentially delete bits of a machine's memory." This ability to lobotomize an AI without it realizing what's happening sounds like something out of Black Mirror, but it may be the ultimate fail-safe for runaway machine learning.
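
To make the idea concrete, here is a minimal sketch of what such a forgetting mechanism could look like, assuming a simple tabular Q-learner. This is not the EPFL team's published algorithm; every function name and hyperparameter below is invented for illustration. The interrupted flag stands in for a human override, and any experience gathered while it is set simply never reaches the learner:

```python
import random
from collections import defaultdict

# Illustrative sketch only -- not EPFL's published method. All names and
# hyperparameters here are invented for this example.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration

q_table = defaultdict(float)  # (state, action) -> estimated long-term value

def choose_action(state, actions):
    """Epsilon-greedy choice over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def learn(state, action, reward, next_state, actions, interrupted):
    """Standard Q-learning update, masked out while a human override is active."""
    if interrupted:
        # The "forgetting" mechanism of the quote, in miniature: experience
        # gathered during an interruption is thrown away, so the agent never
        # learns that interruptions cost it reward -- and therefore never
        # learns behavior whose purpose is to avoid being stopped.
        return
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
```

Because the masked steps never touch the value estimates, the agent ends up with the same learned behavior as if the interruptions had never happened, which is exactly the property Guerraoui describes.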


However, this new method runs into the same problem the neuralyzer does: wiping the memory of one person (or robot) is easy, but what about dozens or even hundreds? Devices like self-driving cars may end up using machine learning algorithms to learn from one another in order to anticipate other cars' movements, meaning that if some of them start to develop "bad" behaviors (especially from watching humans make non-optimal driving decisions), the phenomenon may spread.



But researcher Alexandre Maurer says the new method can handle it. "We worked on existing algorithms and showed that safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption. We could use it with the Terminator and still have the same results."
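
As a rough illustration of how that scaling claim could look in code (again, an invented sketch rather than the published method; Agent, fleet_step, and all parameters are hypothetical), the same masking rule can be applied per agent in a fleet, including when agents learn from watching one another:

```python
from collections import defaultdict

# Purely illustrative multi-agent version of the earlier sketch -- not the
# published EPFL algorithm. Agent, fleet_step, and all parameters are
# invented names for this example.

ALPHA, GAMMA = 0.1, 0.9  # assumed learning rate and discount factor

class Agent:
    def __init__(self):
        self.q = defaultdict(float)   # (state, action) -> value estimate
        self.interrupted = False      # True while a human override is active

    def update(self, s, a, r, s2, actions):
        best_next = max(self.q[(s2, x)] for x in actions)
        self.q[(s, a)] += ALPHA * (r + GAMMA * best_next - self.q[(s, a)])

def fleet_step(agents, transitions, actions):
    """transitions[i] = (s, a, r, s2) experienced by agents[i] this tick."""
    for agent, own in zip(agents, transitions):
        if agent.interrupted:
            continue  # discard the agent's own interrupted experience
        agent.update(*own, actions)
        # Agents also learn from watching their peers, but never from a
        # peer that is currently being interrupted -- so "bad" overridden
        # behavior cannot propagate through the fleet.
        for peer, observed in zip(agents, transitions):
            if peer is agent or peer.interrupted:
                continue
            agent.update(*observed, actions)
```

Nothing is unlearned after the fact; compromised experience simply never enters any agent's value estimates, so one car's interruption cannot bias what the rest of the fleet learns.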



That's a bold claim, Maurer. We'll be depending on you when Skynet finally strikes down Elon Musk, the way he always feared.
