Engineers Develop Ingenious Method to Defeat AI Surveillance... With a Color Printout

Wednesday, 24 April 2019 - 9:24AM
Technology
Image credit: YouTube / Anonymous CVCOPS
You may have noticed that Outer Places regards the implementation of AI-based surveillance – whether in the form of facial recognition, emotional recognition, behavioral analysis, or any other ostensibly passive intrusion into the daily lives of people trying to navigate the muddied waters of a society simultaneously obsessed with insta-fame and privacy – with more than a little skepticism. Whether it's because we generally distrust powers that distrust us, or because we've spent enough professional time online to know there's a dark side to the mediasphere, we tend to lean toward caution when it comes to widely celebrated technological advances, particularly in artificial intelligence.


Conversely, we delight when people find ways to disrupt technology – when human ingenuity triumphs over the machine. Suffice it to say, we were quite pleased to learn that engineers at KU Leuven in Belgium have developed a way – specifically, an adversarial attack – to effectively disrupt object-detection AI powered by the YOLOv2 algorithm. They published their findings on arXiv last week in a paper titled "Fooling automated surveillance cameras: adversarial patches to attack person detection," and were even kind enough to provide the source code.


The method is simple: a multi-colored, computer-printed "patch" confuses the AI. Detectors like YOLOv2 learn to recognize objects – and a person is really just another object to such a system – by being fed hundreds of thousands of example images, and that methodology leaves the system open to exploitation. As the researchers write:


"Understanding exactly why a network classifies an image of a person as a person is very hard. The network has learned what a person looks likes by looking at many pictures of other persons. By evaluating the model we can determine how well the model works for person detection by comparing it to human annotated images. Evaluating the model in such a way however only tells us how well a detector performs on a certain test set. This test set does not typically contain examples that are designed to steer the model in the wrong way, nor does it contains examples that are especially targeted to fool the model. This is fine for applications where attacks are unlikely such as for instance fall detection for elderly people, but can pose a real issue in, for instance, security systems."


Simply put, the patch tricks the AI into not recognizing the object. The object becomes effectively invisible to the system. 
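
To get a feel for how a patch like this is produced, here is a deliberately simplified sketch of the general idea in PyTorch. Everything in it is a stand-in we've invented for illustration: a tiny convolutional net plays the role of YOLOv2, random tensors play the role of photos of people, and the apply_patch helper just pastes the patch into the middle of the frame. The point is the training loop, which treats the patch pixels as the only trainable parameters and optimizes them to suppress the detector's strongest detection score – this is not the authors' released code.

```python
# Illustrative sketch only: a stub "detector" stands in for YOLOv2, and random
# tensors stand in for photos of people. Only the patch pixels are trained.
import torch
import torch.nn as nn

# Stand-in detector: maps an image to a grid of "objectness" scores in [0, 1].
# (Hypothetical placeholder, not the YOLOv2 architecture.)
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, stride=2, padding=1), nn.Sigmoid(),
)
for p in detector.parameters():
    p.requires_grad_(False)  # the detector is fixed; only the patch is optimized

def apply_patch(images, patch, size=60):
    """Paste the patch into the centre of each image – a crude stand-in for
    rendering it on a person's torso with rotation, scale, and noise."""
    out = images.clone()
    resized = torch.nn.functional.interpolate(
        patch.unsqueeze(0), size=(size, size), mode="bilinear", align_corners=False)
    h, w = images.shape[-2:]
    top, left = (h - size) // 2, (w - size) // 2
    out[:, :, top:top + size, left:left + size] = resized
    return out

patch = torch.rand(3, 100, 100, requires_grad=True)      # the printable patch
optimizer = torch.optim.Adam([patch], lr=0.03)

for step in range(200):
    images = torch.rand(8, 3, 256, 256)                  # stand-in person photos
    patched = apply_patch(images, patch.clamp(0, 1))
    scores = detector(patched)                           # per-cell detection scores
    loss = scores.amax(dim=(1, 2, 3)).mean()             # suppress the strongest detection
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The real attack has extra machinery this sketch omits: per the paper, the patch is rendered onto detected people with random rotation, scaling, and noise during optimization, and the loss includes terms that keep the patch smooth and printable so it still fools the detector after it comes out of a color printer.
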


If this sounds familiar, it's because it's similar to other adversarial attacks developed to defeat object-detection AI, including Simone C. Niquille's REALFACE Glamoflage t-shirts, which managed to defeat a certain social media giant's facial recognition back in 2013 (fun fact: whenever we write anything critical about this company whose name sounds like BaseFook, they choke our reach... one might think they have an algorithm in place to detect such thoughtcrimes).


Although the patch only works on YOLOv2 for now, it raises the possibility of other, similar techniques that might be used to confound AI systems short of destroying video cameras with not-flamethrowers. In an interview with MIT Technology Review, paper co-author Wiebe Van Ranst said that the feasibility of an adversarial attack depends on which algorithm is deployed. "At the moment we also need to know which detector is in use," Van Ranst told the publication.


"What we'd like to do in the future is generate a patch that works on multiple detectors at the same time," he added. "If this works, chances are high that the patch will also work on the detector that is in use in the surveillance system."
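
That kind of transfer is usually pursued with an ensemble attack: optimize one patch against several detectors at once and hope it generalizes to an unseen model. The paper doesn't report this – it's described as future work – but continuing the hypothetical sketch above (reusing its patch, optimizer, and apply_patch helper), the change amounts to averaging the loss over a set of stand-in detectors:

```python
# Hypothetical continuation of the earlier sketch: one patch, several detectors.
import torch
import torch.nn as nn

def make_stub_detector(width):
    """Stand-in for a fixed, pre-trained detector (not a real YOLO variant)."""
    net = nn.Sequential(
        nn.Conv2d(3, width, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(width, 1, 3, stride=2, padding=1), nn.Sigmoid(),
    )
    for p in net.parameters():
        p.requires_grad_(False)
    return net

detectors = [make_stub_detector(w) for w in (8, 16, 32)]   # the "ensemble"

for step in range(200):
    images = torch.rand(8, 3, 256, 256)                    # stand-in person photos
    patched = apply_patch(images, patch.clamp(0, 1))        # helpers from the sketch above
    # Average the strongest-detection loss across every detector in the ensemble,
    # so the patch isn't tailored to a single model's blind spots.
    loss = torch.stack(
        [d(patched).amax(dim=(1, 2, 3)).mean() for d in detectors]
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
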



Beyond privacy concerns in an age of increasing surveillance – including massive efforts by China and other governments that aim to stifle, if not crush, dissent – this research also demonstrates the importance of red-teaming when designing security measures. If you're not examining your security – whether physical, technological, or otherwise – with the same creativity as someone with an incentive to breach it, then it's not a question of if, but when, your systems will fail.


The authors will be presenting their paper at CV-COPS 2019 on June 16th in Long Beach, California. 

Science
Artificial Intelligence
Technology