Image Recognition AI Is Easily Fooled Into Thinking a Turtle Is a Rifle

Thursday, 02 November 2017 - 7:28PM
Technology
Artificial Intelligence
Labsix
As world leaders embrace autonomous robot warfare with disconcerting enthusiasm, it's hard not to wonder how things could go terribly wrong. Robot manufacturers are eager to argue that their bots can accurately identify a hostile situation, but many people still worry about what might happen when a future security robot spots a deadly weapon where there is none.

Or, to put it bluntly, we're all just a little bit frightened that the ED-209 boardroom scene from Robocop will become a reality, and a deadly robot will start seeing guns that aren't there. If that scene gave you nightmares as a kid, then bad news - researchers have found a way to bring it closer to reality, albeit thankfully without an actual killer robot. Just an AI program and, interestingly enough, a model of a turtle.

Through a relatively simple exploit that involves changing just one pixel of the image fed to an image-recognition AI, the researchers were able to fool the artificial intelligence into classifying video footage of a 3D-printed turtle as a rifle.
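To make the idea concrete, here is a minimal sketch of a one-pixel attack. The "classifier" is a hypothetical linear scorer over a 16-pixel image, standing in for a real image-recognition model; the weights, class labels, and brute-force search are all illustrative assumptions, not the researchers' actual method.

```python
import random

random.seed(0)
NUM_PIXELS = 16
# Hypothetical toy model: linear scores for two classes,
# 0 = "turtle" and 1 = "rifle". A real attack would query an
# actual image-recognition network instead.
WEIGHTS = [[random.uniform(-1.0, 1.0) for _ in range(NUM_PIXELS)]
           for _ in range(2)]

def predict(image):
    """Return the class index the toy model assigns to a flat pixel list."""
    scores = [sum(w * p for w, p in zip(row, image)) for row in WEIGHTS]
    return scores.index(max(scores))

def one_pixel_attack(image, target):
    """Brute-force search for a single pixel change that makes the
    toy model predict `target`. Returns the modified image, or None
    if no single-pixel change works."""
    for i in range(len(image)):
        for value in (0.0, 1.0):  # try the two extreme intensities
            candidate = list(image)
            candidate[i] = value
            if predict(candidate) == target:
                return candidate
    return None

image = [random.uniform(0.0, 1.0) for _ in range(NUM_PIXELS)]
original = predict(image)
adversarial = one_pixel_attack(image, target=1 - original)
if adversarial is not None:
    changed = sum(a != b for a, b in zip(adversarial, image))
    print(f"prediction flipped from class {original} "
          f"by changing {changed} pixel(s)")
```

Real one-pixel attacks search the pixel and colour values with an optimizer (such as differential evolution) rather than exhaustively, but the principle is the same: a tiny, targeted change to the input flips the model's output.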


The success rate for this trick was 74%, and when the researchers changed five pixels instead of just one, the AI incorrectly identified a rifle 87% of the time. Such a tiny exploit, simply applied, could have wider implications: altering the logic of an entire army of robots to the point that they raise alarms over literally anything - or anyone - that hackers choose.


Military hardware isn't the only equipment that can be hacked with dangerous results - some industrial equipment, as well as several commercially available children's toys, has proven vulnerable to malware that could turn it into a genuine hazard. Of course, nobody's going to feel too threatened by a pint-sized robot; the idea of killer war droids suddenly firing at the wrong targets, though, is a far more terrifying thought - not least because it only hints at the full danger of the situation.
America, for example, now has a Linux-run destroyer that can identify and automatically respond to hostile threats while traveling in stealth mode. A small adjustment to the ship's image-recognition protocols would be all it would take to convince the ship to start firing on civilian targets out of a misguided sense of self-preservation.

The problem with all technology, as we've seen time and again, is that no security system is flawless. If even the world's largest banks and commercial companies can't protect against every single possible hacking attempt, then the world's militaries don't stand a chance.

This new research comes with an important lesson: no matter how safe you feel, no matter how sure you are that a robot warrior is safe to be around, maybe still try not to stand directly in its line of fire. That, and just pray that Robocop is around if you ever run into the real-world equivalent of ED-209. Especially if you're holding a turtle.
