In 2012, a University of Toronto professor and graduate students built a neural network that could teach itself, using thousands of photos, to accurately identify common objects. By 2018, facial recognition systems were being used to identify a suspect in the shootings at The Capital Gazette's newsroom in Annapolis, Md. But as facial recognition systems advance and become less expensive, the technology's use is widening--and new ethical concerns are developing along with it. The suspect in the Annapolis shooting refused to give police his name after he was apprehended, so law enforcement officials used facial recognition to identify him more quickly than they could have through his fingerprints. The use of facial recognition technology is increasing across U.S. law enforcement, including local police forces.
According to a report from the Georgetown Law Center on Privacy and Technology, 16 states now allow the FBI to use facial recognition systems to compare the faces of suspected criminals with identification photos, potentially affecting 64 million people. Georgetown Law points out that the practice is largely unregulated and that, while it does have benefits, legal guidelines are needed.
There are civil liberties concerns about the technology's increasing use across all levels of policing, and criticism has been leveled at companies that have worked with those agencies--including Amazon, whose Rekognition facial recognition system is used by police and touted as a way to identify both license plates and people in crowds.
Swift Growth of Technology
Amazon introduced Rekognition, a cloud-based service designed to identify objects and people in photographs, in late 2016, and quickly began pitching it to law enforcement agencies. The service's use widened, but so did the backlash: By May of this year, more than two dozen civil rights organizations--including the ACLU--had asked the tech giant to stop selling the service to police.
“These technologies threaten to subject us to perpetual, dragnet surveillance in which we are nonconsenting subjects in a never-ending series of investigations,” the ACLU said in a statement on the practice.
This kind of facial recognition technology is now widely available to police agencies and the general public alike: A photo or video image of a face can be matched against a database of faces with high accuracy. Facial recognition systems even exist on the smartphones many of us carry in our pockets, some of which can now identify and tag specific people in photographs based on past tagging.
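At a high level, systems like these typically reduce each face image to a numeric "embedding" vector produced by a trained neural network, then compare embeddings with a similarity measure. The sketch below is a minimal illustration of that comparison step only--the embedding vectors, names and similarity threshold are all made up for illustration, and real systems use much higher-dimensional embeddings derived from actual images.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.9):
    """Compare a probe embedding against every entry in the database.

    Returns (name, score) for the best match, or (None, score) if no
    entry clears the threshold. The threshold here is arbitrary; real
    systems tune it to trade off false matches against misses.
    """
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score

# Toy "mug shot" database: in practice these vectors would come from a
# face-embedding model applied to enrollment photos.
database = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.88, 0.12, 0.31])  # embedding of the photo being checked
name, score = best_match(probe, database)
```

Note that the threshold matters: set it too low and unrelated faces "match" (a false identification), set it too high and genuine matches are missed--which is one reason image quality and tuning figure so heavily in the accuracy concerns discussed below.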
Facial recognition systems are also available on a large scale to a variety of government and law enforcement agencies. In the case of the Annapolis shooter, Jarrod Ramos, police took his photo while he was in custody and submitted it to the Maryland Coordination and Analysis Center, which used the state's Image Repository System, containing mug shots and driver's license photos, to find a match.
Concerns around Ethics and Discrimination
Other concerns arise when such databases are used in a less directed fashion: Even if people are never falsely identified through facial recognition technology, they are still subject to surveillance to which they may not have consented. And while the technology is increasingly accurate, mistakes are possible, especially when lower-quality images are used.
For example, the Verge reported in June that the United States will begin a pilot program in August that thrusts the firms that make and sell facial recognition systems into the middle of the ongoing debate over migrant crossings into the country.
According to the Verge, Customs and Border Protection will deploy the Vehicle Face System in August to scan drivers' faces as they leave the United States. VFS "is planned for installation at the Anzalduas border crossing at the southern tip of Texas and scheduled to remain in operation for a full year," notes the Verge. "The project is currently moving through the necessary privacy reviews, and it is set to be officially announced and submitted to the Federal Register in the coming months."
According to the Verge, government records indicate that authorities captured images of people in Texas and Arizona as they left work, picked up children from school and otherwise went about their daily lives--and that the resulting repository of images will be part of the pilot program.
Once the plan was reported, it was quickly condemned by the ACLU, the Center for Media Justice and other civil liberties groups, which pointed to the inherent bias in facial recognition technology. There is evidence that the use of facial recognition technology disproportionately affects people of color.
“Black and brown people already are overpoliced and face disparate treatment in every stage of the criminal justice system,” the ACLU said in its statement. “Facial recognition likely will be disproportionately trained on these communities, further exacerbating bias under the cover of technology.”
But attorney Marc Lamber, of the law firm Fennemore Craig in Phoenix, pointed out that the tool itself is not necessarily to blame.
“It appears that much of the concern relates to the purported misuse of the technology to perpetuate an underlying and misguided policy directed at inappropriately surveilling certain demographics,” said Lamber. “I would respectfully submit that rather than attacking the provision of technology--which I would worry would stifle technological advancement--we should instead address and, if appropriate, attack the inappropriate policies themselves.”
Some companies are working to reduce the biases built into facial recognition systems. For example, Microsoft recently touted improved skin-tone and gender detection in its facial recognition algorithms. Other countries--in particular, China--are investing heavily in the technology, which means the debate isn't limited to American firms.
Indeed, the debate will likely only heat up, and technology companies--and their customers--will need to watch the evolution of facial recognition systems and other biometric and artificial intelligence technologies. Microsoft and other tech companies have recently come under fire after contracting with government agencies for use of the technology. Microsoft, for one, has said it will more carefully consider contracts in this area and urged lawmakers to regulate the use of such artificial intelligence to prevent abuse.
And as criticism of the current uses of facial technology grows, positive uses must also be considered.
“To the extent that the technology is being used relative to immigrants or refugees," noted Lamber, "could it be used to reunite parents with their children based on facial recognition?”