Microsoft Calls for Regulation to Address Facial Recognition Issues--Will Others Follow?

Microsoft says the emerging technology must be legislated, but that may be a hard sell even as facial recognition issues increase.

A call for regulation to address facial recognition issues recently came from an unlikely source: Microsoft. In a note written by President Brad Smith and posted to Microsoft’s On The Issues blog on July 13, the tech company called for regulation of the use of computer-assisted facial recognition, a technology that raises issues ranging from privacy rights to technological bias and the misuse of police powers.

“These issues heighten responsibility for tech companies that create these products,” Smith wrote in the post. “In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.”

The call for regulation from one of tech’s biggest companies is both unusual and welcome, said Kentaro Toyama, W. K. Kellogg Associate Professor at the University of Michigan School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of Technology.

Tech companies rarely call for regulation of technologies they themselves are developing, as Microsoft is with facial recognition technology, said Toyama, who previously worked at Microsoft doing research on facial recognition and related technologies. But Microsoft’s move to break that mold recognizes both that this technology should be regulated and that current legislation needs to catch up, he said.

Rapidly Advancing Technology

“Face recognition is a good place for policy to start because the technology is easy for the public to understand and functionally contained,” said Toyama. “So discussions about its societal impact are straightforward, even if the decisions might not be.”

Both the abilities and applications of facial recognition technology have expanded rapidly in the past decade--far more quickly than any legislation that would apply curbs or prescriptions to its use. Computer vision can now recognize people’s faces in an image or video more quickly and accurately than ever before, and digital cameras and sensors have become more advanced. Machine learning, the area of artificial intelligence that underpins facial recognition, has also advanced in recent years. Faster processing speeds and cloud storage mean data can be accessed more quickly, from anywhere. And recent decades have seen an explosion of digital data--the kind that facial recognition systems use to learn and work.
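At its core, the recognition step reduces to comparing numerical "encodings" of faces produced by a machine learning model. The following is a minimal sketch of that matching step, assuming the open-source face_recognition Python library (built on dlib); the image filenames are hypothetical placeholders, not part of any vendor's system described in this article.

```python
# Minimal sketch: match an unknown face against a known one by comparing
# 128-dimensional face encodings produced by a neural network.
import face_recognition

# Encode a known face from a reference photo (hypothetical filename).
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode any faces found in a new photo or video frame (hypothetical filename).
unknown_image = face_recognition.load_image_file("unknown_face.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# A match is declared when the distance between encodings falls below
# a tolerance threshold (0.6 by default in this library).
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={match}, distance={distance:.3f}")
```

The same basic comparison underlies both benign uses (tagging friends in photos) and contested ones (scanning surveillance footage); what differs is the data being matched and who controls it.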

Some of the uses of facial recognition technology have become common--for example, recognizing the faces of your friends in your Facebook albums or in photos taken on your tablet.

Other uses of facial recognition technology are less obvious. For example, there are law enforcement agencies in the United States that use facial recognition to fight crime and narrow down suspects. In China, facial recognition technology is increasingly being used in public spaces to surveil citizens. In the American example, there is a lack of regulation; in the Chinese one, the regulation supports the technology’s use.

Who Decides When the Technology Is Used for Good?

In a democracy, decisions made by elected officials are key to balancing public safety and freedom, the Microsoft blog post said.

“Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives,” Smith wrote. Legislation is an important part of defining that role, Microsoft argues.

Some uses of the technology are positive. For example, facial recognition technology was used to identify the suspect in the recent shooting at a Maryland newspaper when he would not provide that information himself. But there are also ways facial recognition technology can be intentionally exploited, or unintentionally used in ways that cause harm.

As advanced as the technology has become, it is not perfect and still contains the biases inherent in a technology built by humans. In its call for regulation, Microsoft pointed out that facial recognition technology has been shown to work more accurately for white men than for white women, and for people with lighter complexions than for those with darker ones. This is a concern when the technology is used, for example, in policing, as research has shown that racialized communities are more likely to be over-policed.
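How do researchers demonstrate such disparities? In essence, by scoring a system separately on each demographic group in a labeled test set and comparing the error rates. The sketch below illustrates that bookkeeping with a hypothetical, hand-built data structure; it is not drawn from any particular study or vendor API.

```python
# Minimal sketch: compute a recognition system's error rate per demographic
# group from an annotated test set. The records here are hypothetical.
from collections import defaultdict

# Each record: (demographic group, true identity, identity predicted by the system)
results = [
    ("lighter-skinned male", "id_01", "id_01"),
    ("darker-skinned female", "id_02", "id_07"),
    # ... many more annotated test cases would go here ...
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, true_id, predicted_id in results:
    totals[group] += 1
    if predicted_id != true_id:
        errors[group] += 1

for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{total})")
```

A large gap between groups in a comparison like this is exactly the kind of evidence behind the accuracy disparities Microsoft cites.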

Also, privacy laws in North America have not kept up with technological advances that open new frontiers of data collection--and privacy violation. In July it was reported that shopping centers in several large Canadian cities were using facial recognition technology to track information about visitors without notifying them or obtaining their consent. That news is just one example of how facial recognition technology can be used for surveillance in ways that may be largely benign, but potentially are not.

“Face recognition can raise issues of intrusions on privacy, a tighter surveillance state, racial or gender discrimination (based on a less-than-perfect technology), unwanted commercial use of data, bureaucratic errors (due to misclassification), and so on,” Toyama said. “All of these things need to be regulated if we want to preserve citizen rights to privacy, political equality and a balance of power against corporations.”

How Other Tech Companies May React

It seems inevitable that regulations will affect the way facial recognition systems can be used in the future. So why would Microsoft issue a call for regulation when it is also working on these technologies? There are several possible explanations, said Toyama, who suspects the company’s motivation is multifaceted.

The company’s face-value explanation--that it wants to do the right thing and help reduce the negative impacts of a powerful technology--could play a part, he said. Microsoft’s rivals are also coming under increasing scrutiny--think of Facebook, which in July boosted investment in AI research even as concerns continued about its use of data--and Microsoft may be looking to set itself apart from the tech pack.

“Yet another [possible explanation] is that if the company is going to claim a moral high ground and limit its use of facial recognition technology for positive purposes only, then it helps the company compete, if other companies are held to the same standard,” Toyama said. It may not look like it on the surface, but the company may have a strategic reason for taking this stand.

Microsoft is not alone among its tech peers in calling for regulation.

Brian Brackeen, CEO of facial recognition software developer Kairos, wrote in a June op-ed that facial recognition technology wasn’t good enough yet for police use. SpaceX’s Elon Musk told U.S. governors last year that they had to be proactive in AI regulation, rather than reactive.

And more recently, Amazon challenged an ACLU study in which the company’s Rekognition facial recognition technology, which is sold to police forces, incorrectly matched members of the U.S. Congress to mug shots. But as members of Congress demanded answers from Jeff Bezos, Amazon also republished its blog post defending the product, adding a call for regulators to ensure its proper use. It’s a sign of the shifting tide of opinion on the use of this technology.

It’s hard to say at this point which tech vendors--if any--will decide that the trade-off between business and morality favors supporting Microsoft’s call for regulation, or whether such laws will come, Toyama said: “Will other large tech companies get on board? They should, both because it's the right thing to do and because it will help them shape the regulation if they're at the table.”
