
2020 Predictions: Black Hats Begin to Target Facial Recognition

Research interest in defeating facial recognition technology is booming. Adversaries are likely taking notice, but don't expect widespread adoption overnight.

Facial recognition may not be a classic IoT use case, but the technology often follows the same basic pattern as many IoT applications: a sensor-enabled networked device gathers data about an object in the physical world, digitizes it and analyzes it to trigger a potential action. In this case, an IP camera scans the face of an individual and checks its features against a database of known faces. If a wanted criminal turns up at an airport, a facial recognition system could, for instance, automatically notify law enforcement.
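The matching step at the heart of that pipeline can be sketched in a few lines. The snippet below is a toy illustration, not any vendor's actual system: the random vectors stand in for the face "embeddings" a real model would produce, and the names and threshold are hypothetical.

```python
import numpy as np

# Toy watchlist of face embeddings. In a real system these would come
# from a trained face-recognition model; here random vectors stand in.
rng = np.random.default_rng(1)
watchlist = {"suspect_a": rng.normal(size=128),
             "suspect_b": rng.normal(size=128)}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_face(embedding, threshold=0.9):
    """Return the best watchlist match above threshold, else None."""
    best = max(watchlist, key=lambda k: cosine_similarity(embedding, watchlist[k]))
    if cosine_similarity(embedding, watchlist[best]) >= threshold:
        return best          # e.g., trigger a law-enforcement alert
    return None

# A scan close to suspect_a's stored embedding triggers a match...
probe = watchlist["suspect_a"] + 0.05 * rng.normal(size=128)
print(check_face(probe))                   # -> suspect_a
# ...while an unrelated face falls below the threshold.
print(check_face(rng.normal(size=128)))    # -> None
```

The threshold is the key operational knob: set it too low and innocent travelers trigger alerts; set it too high and noisy camera footage fails to match.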

The volume of facial recognition use cases is swelling beyond surveillance. Access control is one burgeoning area for facial recognition and other types of biometrics. Amazon Go stores use facial scans. In China, cities such as Shenzhen and Jinhua are using the technology to let residents pay for public transit with a face scan. And now that facial recognition has become a mainstream method for unlocking smartphones, the technology will likely gain ground as a means of access to everything from secure portions of buildings to industrial machines. On the consumer side, Amazon’s Ring and a startup known as WUUK Labs have developed video doorbells.

But as facial recognition and other types of image recognition gain ground, so does interest in defeating or blocking them entirely. While most of that interest to date has come from research environments, that could begin to change in 2020. Machine learning researchers have already succeeded in defeating image recognition mechanisms in everything from facial recognition systems to traffic sign classifiers, in one case convincing the latter to mistake stop signs for yield signs.
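These attacks typically rely on adversarial examples: small, deliberately chosen input perturbations that flip a model's prediction. As a minimal sketch of the idea, the snippet below applies the fast gradient sign method to a toy linear classifier; the weights and inputs are synthetic stand-ins, not a real recognition model.

```python
import numpy as np

# Toy linear "classifier" standing in for an image-recognition model:
# score = w . x ; label 1 if the score is positive, else 0.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # weights for a flattened 8x8 "image"

def predict(x):
    return int(w @ x > 0)

# Fast gradient sign method (FGSM): move every input dimension a small
# step epsilon in the sign of the loss gradient. For a linear score the
# gradient w.r.t. x is just w, so stepping against sign(w) lowers the score.
def fgsm_attack(x, epsilon):
    return x - epsilon * np.sign(w)

# An input the model classifies as class 1, but with a small margin.
x = (0.1 / (w @ w)) * w
assert predict(x) == 1

x_adv = fgsm_attack(x, epsilon=0.01)
print(predict(x_adv))   # -> 0: a tiny per-pixel change flips the prediction
```

The same principle, scaled up to deep networks and optimized over a printable region instead of the whole image, yields the adversarial patches and stickers described below.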


Recently, researchers at Facebook AI and the University of Maryland created an adversarial print on a sweatshirt that could enable its wearer to elude detection by public surveillance systems. 

Earlier this year, researchers at the Belgian university KU Leuven created an adversarial print — roughly a foot square — that enabled someone wearing it to escape identification by a person detector. Meanwhile, researchers in Russia used a similar technique — affixing a sticker to a hat — to defeat facial recognition systems.  

The barrier to entry for individuals wanting to manipulate images is quickly lowering, as McAfee’s annual predictions report explained. Websites already exist where an individual can upload a video to be turned into a deepfake version for a nominal fee, or potentially for free. Nation-states will likely begin using deepfake technology to sow seeds of chaos across the world, according to McAfee Chief Technology Officer Steve Grobman.

Grobman’s colleague Steve Povolny, head of McAfee Advanced Threat Research, foresees adversaries using deepfakes to bypass facial recognition.

In October, VentureBeat reported on research from Facebook AI Research that succeeded in defeating video-based facial recognition systems — even in real time. 

Interest in lower-tech options for defeating facial recognition is increasing as well. A Wired UK article cites the use of 3D-printed masks, bespoke textiles, makeup and infrared lights, among other strategies, to defeat such systems.

Given many hackers’ interest in privacy and in defeating emerging security controls, it’s only a matter of time before lab-based research dedicated to defeating image recognition systems spills over into the everyday world. 

Trend Micro’s Jon Clay, director of global threat communications, points out, however, that techniques ranging from deepfakes to adversarial machine learning are likely still in an embryonic stage. “Based on most information we’ve seen the likelihood of broad usage of these technologies is still a ways off,” Clay said. “There have been a few instances of adversarial machine learning, which is mainly malware designed to defend itself against a machine learning technology,” he added. Video-based deepfakes are likely still a future threat, but voice-based deepfakes could be a threat in the near term. Clay envisions that an adversary could, for instance, record a voicemail with instructions to, say, wire-transfer money. “I don’t see deep fake video used in business unless it is to hurt a public company’s stock price by manipulating something their CEO says publicly and using social media to expand the video.”

Even as techniques like adversarial machine learning and deepfakes advance, it will likely take some time before adversaries adopt them at scale. Most cybercriminals stick with tactics, techniques and procedures (TTPs) that “don’t require the investment and time required to utilize these newer technologies,” Clay said. “They will continue to utilize ransomware, BEC, phishing and exploits of known vulnerabilities in their attacks, simply because it still works. Until it doesn’t, we won’t see widespread adoption of these newer technologies,” he concluded.
