
How to fool visual recognition tools (and how to foolproof them)

Ruben van der Zwan
CEO & Co-Founder

Robots and autonomously operating devices are scary. They’ll probably outsmart us eventually and take over the world. When that happens, mankind is doomed, as you don’t argue with out-of-control monsters with superior reasoning skills. Luckily for us, this is not 2001: A Space Odyssey by Stanley Kubrick, but real life. And although future machines will be far superior at many tasks, we’re still a long way from developing such creatures (if we ever will). But that doesn’t mean we don’t have problems. When it comes to device safety, it’s still our fellow human beings who form the biggest threat. People with bad online intentions (a.k.a. black hat hackers) keep finding new ways to sabotage autonomously operating devices that were developed to make our lives better, easier and safer. How do they do it? And more importantly: can you stop them?

Forever offline

The best way of securing your devices, data and people is to go offline forever. Obviously, malware can’t enter your organization when there’s no infrastructure. Being offline might make you feel comfortable for a day or two, but soon you’ll realize that you simply don’t function without the internet. Your employees can’t work from home, your lease cars don’t tell you when they need fixing, and you can’t even send a simple e-mail that lunch is ready. Being offline is clearly not the solution to the hacking problem. You not only need connectivity to survive; you also need it to create new services and products and to update your existing ones. So instead of going offline, step one is acknowledging that being online comes with risks. To reduce those risks, you’ll have to take serious measures to protect what’s yours. These measures will only work when you know how hackers operate and how they find your weak spots. We’ll get you started by describing two incidents and how they could have been avoided.

Stealing face

More and more computer programs use facial recognition as a login method. You simply put your face in front of the camera, look straight into the lens and wait for the security system to verify that it’s really you requesting access. Such systems screen your face using all sorts of cues that tell them which human being is standing in front of the camera. They scan features like bone structure, face shape and signs of life, such as blinking and smiling. Foolproof? Far from it. Recently, security and computer vision specialists from the University of North Carolina showed they could fool facial recognition systems by mapping photos they found online onto a 3-D facial model. And as most people barely control their online presence and share photos of themselves across different channels, matching login IDs with the right face turned out to be a piece of cake.
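At a high level, such a login compares a stored representation of your face with a fresh camera capture and accepts you if the two are similar enough. The sketch below is purely illustrative: the three-number embeddings, the cosine-similarity measure and the 0.8 threshold are all assumptions for the example (real systems use learned, high-dimensional embeddings), but it shows why a good 3-D reconstruction of your face can pass the same similarity test as your real face.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face embeddings: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, capture, threshold=0.8):
    # Grant access only if the fresh capture is close enough to the
    # embedding stored at enrollment (threshold is an assumption here).
    return cosine_similarity(enrolled, capture) >= threshold

# Toy embeddings for illustration only.
enrolled = [0.9, 0.1, 0.4]
genuine  = [0.88, 0.12, 0.41]   # same person, slightly different capture
print(verify(enrolled, genuine))  # similar embeddings: access granted
```

The weak spot is that the check only measures similarity: a convincing 3-D model built from your online photos produces an embedding just as close to the enrolled one, which is why liveness cues (blinking, infrared depth) matter so much.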

Hacking self-driving cars

Now that the online world is integrated with the physical one through connected devices, the playground of hackers has expanded. No matter the device: if it’s connected to the internet, it can be hacked. A research team from several American universities discovered a way to make self-driving cars misinterpret street signs by applying small alterations with stickers and graffiti. This way, they managed to turn a stop sign into a 45-mile-per-hour speed limit sign, which could lead to fatal accidents. So what happened here? Self-driving cars are taught to read cues and act upon them. These cues can be street signs, moving objects, or weather conditions such as rain or snow. Based on the many examples of cues it has seen in the past, the self-driving car recognizes what’s in front of it and responds according to a pre-defined set of rules: if it “sees” a stop sign, it stops. However, because the researchers had unraveled the algorithm for stop signs, they knew exactly which changes would make the sign look like something else. Since the system does not look at the sign itself but assigns likelihoods that something is a traffic sign (0.99), a dog (0.33) or an airplane (0.01), it could be misled.
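Those likelihood scores can be illustrated with a minimal sketch. The labels, the raw score values and the softmax step below are all illustrative assumptions, not the actual model from the research, but they show the mechanism: the classifier only outputs relative confidences, so a small, well-chosen shift in the input scores is enough to flip the winning label.

```python
import math

def softmax(scores):
    # Turn raw classifier scores into likelihoods that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["stop sign", "speed limit 45", "dog"]

# Hypothetical scores for a clean stop sign: "stop sign" clearly wins.
clean = softmax([4.0, 1.0, -2.0])
print(labels[clean.index(max(clean))])

# A few stickers nudge the features just enough to change the scores,
# and now "speed limit 45" is the most likely label instead.
perturbed = softmax([1.5, 3.5, -2.0])
print(labels[perturbed.index(max(perturbed))])
```

Nothing about the physical sign changed for a human observer; only the model’s internal scores moved, which is exactly what makes this class of attack so hard to spot.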

Context

What did we learn from this? Will connected devices and cameras always have weak spots that hackers can use to make our lives miserable? In theory: yes. There’ll always be a cat-and-mouse game between the good guys and the bad guys. However, more research on the development and testing of new technologies will certainly limit the number of opportunities for hackers. Moreover, facial recognition and self-driving cars are both recent developments and still have a lot to learn. Self-driving cars, for example, shouldn’t just step on the gas when they see a 45-mile-per-hour sign; they should realize they’re on a small country road that is about to merge with a highway, and they should know there’s a truck coming their way from behind a row of trees. As for facial recognition, researchers are currently working on infrared screening to make better facial analyses. This way, the camera won’t be fooled by a high-resolution photo stretched over a 3-D face model. Last but not least, systems must not only learn to recognize objects or faces; they should also learn about all the things they don’t see. This way, a self-driving car will know there’s no 45-mile-per-hour sign, because some of the cues it would expect are still missing.
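That kind of context check can be sketched very simply. Everything here is a hypothetical illustration (the function name, the 0.9 confidence bar and the 15 mph plausibility margin are all made up for the example), but it captures the idea: a single camera reading, however confident, shouldn’t override what the car already knows from map data.

```python
def plausible_speed_limit(detected_limit, map_limit, detector_confidence):
    # Hypothetical sanity check: cross-reference a camera-detected
    # speed limit against the limit the map reports for this road.
    if detector_confidence < 0.9:
        return map_limit              # weak detection: trust the map
    if abs(detected_limit - map_limit) > 15:
        return map_limit              # implausible jump: trust the map
    return detected_limit             # confident and plausible: accept

# A tampered stop sign read as "45 mph" on a 25 mph country road
# is rejected, because the jump from the mapped limit is implausible.
print(plausible_speed_limit(45, 25, 0.99))
```

Layering cheap cross-checks like this on top of the vision system means an attacker has to fool several independent sources at once, not just one camera.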

Do you believe in the future of safe visual recognition? Let us know by leaving a comment!

Want to know more about the crazy but wonderful world of connected devices? Download our book “Switch on your IoT”.
