Michio Kaku: Robots Need Inhibitor Chips To Prevent Human Slaughter

Friday, 23 February 2018 - 10:19AM
Artificial Intelligence
Robotics
Neuroscience
Noteworthy scientists and technology experts, including Stephen Hawking and Elon Musk, are speaking out about the need for humanity to be very, very careful about how we approach robot design over the upcoming decades. Considering the complexity and adaptability that computers can develop thanks to machine learning, it wouldn't take much for an all-out war to engulf the planet. There's very little chance that humans would win if we ever ended up with a Skynet situation.

In a recent Reddit AMA, prominent futurist and author of The Future of Humanity Dr. Michio Kaku made his opinion clear on what we need to do to prevent humanity's extinction at the hands of killer robots: we need to build machines with inhibitor chips to shut down their motherboards if – or when – they develop violent thoughts.

According to Kaku:

"Right now, robots have the intelligence of a bug. They can barely walk across a room. Simple tasks done by humans (picking up garbage, fixing a toilet, building a house, solving a crime) are way beyond what a robot can do. But, as the decades go by, they will become as smart as a mouse, then a rat, then a cat, dog, and monkey. By that point, they might become dangerous and even replace humans, near the end of the century. So I think we should put a chip in their brain to shut them off if they have murderous thoughts."


The concept appears in science fiction time and time again. Spider-Man 2, for example, features an inhibitor chip designed to limit robotic free thought (and to keep Doctor Octopus from going crazy).



This idea hearkens back to Isaac Asimov's "Three Laws of Robotics," which limit what robots can and can't do, chiefly forbidding them from hurting people or disobeying orders.

However, this protection strategy isn't foolproof, as movies like Spider-Man 2 and I, Robot (itself based on Asimov's stories) can attest. In both films, whether by accident or deliberate design, the robots find a way to overcome their safeguards (for better or worse).

In the long term, then, inhibitors won't be enough. This being the case, Kaku believes we ought to merge with robots, augmenting our brains until there's no clear distinction between human and machine.

Said Kaku:

"What happens centuries from now, when robots can evade even our most sophisticated fail-safe systems? At that point, I think we should merge with them. This may sound strange to some people, but remember that it is the people of the far future (not us) who will decide how far they want to modify themselves to deal with supersmart robots."


Fans of Mass Effect will appreciate one Redditor's response to this comment.



As strange as human/robot synthesis might sound, scientists are already working on this kind of technology.

Most notably, Elon Musk has a team of experts working to build a so-called "neural lace," a first step toward enabling humans to mind-meld with computers and become smarter and more capable.

A marriage between organic and synthetic brains might just be the future of both kinds of intelligence.

Cover photo: Public Domain Images/CC0 1.0