A growing number of gadgets are scanning your face.
Facial recognition is having a reckoning. Recent protests against racism and police brutality have shined a light on the surveillance tools available to law enforcement, and major tech companies are temporarily backing away from facial recognition and urging federal officials to step in and regulate.
Late last month, we learned of the first-known false arrest caused by a faulty facial recognition system, involving a Black man in Michigan identified by software that Detroit’s police chief later admitted had a 96 percent misidentification rate. And a policy group from the Association for Computing Machinery, a computing society with nearly 100,000 members, has called for the suspension of corporate and government use of the technology, citing concerns that its built-in biases could seriously endanger people.
There’s also pressure from Congress. Reps. Pramila Jayapal and Ayanna Pressley and Sens. Jeff Merkley and Ed Markey have proposed new legislation that would prohibit federal government use of facial recognition and encourage state and local governments to do the same. It’s one of the most sweeping proposals to limit the controversial biometric technology in the United States yet and has been hailed by racial justice and privacy advocates.
All of this follows a move by several major technology companies, including IBM, Amazon, and Microsoft, to pause or limit law enforcement’s access to their own facial recognition programs.
But amid the focus on government use of facial recognition, many companies are still integrating the technology into a wide range of consumer products. In June, Apple announced that it would be incorporating facial recognition into its HomeKit accessories and that its Face ID technology would be expanded to support logging into sites on Safari. In the midst of the Covid-19 pandemic, some firms have raced to put forward more contactless biometric tech, such as facial recognition-enabled access control.
“When we think about all of these seemingly innocuous ways that our images are being captured, we have to remember we do not have the laws to protect us,” Mutale Nkonde, a fellow at Harvard Law School’s Berkman Klein Center, told Recode. “And so those images could be used against you.”
The convenience that many find in consumer devices equipped with facial recognition features stands in stark contrast to the growing pressure to regulate and even ban the technology’s use by the government. That’s a sign that officials looking to effectively regulate the tech will have to take into account its range of uses, from facial recognition that unlocks a smartphone to the dystopian-sounding databases operated by law enforcement.
After all, when earlier this year Recode asked Sen. Jeff Merkley what inspired his push to regulate the technology, he pointed out how quickly the Photos app on his iPhone could identify members of his family. He was struck by how easily law enforcement could track people with the technology, and by how powerful it had already become on his own device.
“You can hit that person, and every picture that you’ve taken with that person in it will show up,” Merkley said at the time. “I’m just going, ‘Wow.’”
Facial recognition is becoming more widespread in consumer devices
One of the most popular uses of facial recognition is verification, which is often used for logging into electronic devices. Rather than typing in a passcode, a front-facing camera on the phone snaps a picture of the user and then deploys facial recognition algorithms to confirm their identity. It’s a convenient (though not completely foolproof) feature made popular when Apple launched Face ID with the iPhone X in 2017. Many other phone companies, including Samsung, LG, and Motorola, now provide facial recognition-based phone unlocking, and the technology is increasingly being used for easier log-ins on gaming consoles, laptops, and apps of all kinds.
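In rough terms, this kind of verification boils down to comparing the face in front of the camera to the one enrolled on the device. The sketch below is an illustrative assumption, not Apple’s actual method: real systems like Face ID run a neural network over a camera frame to produce a numeric “embedding” and compare it on dedicated secure hardware, while the `distance` and `verify` functions and the threshold here are toy stand-ins.

```python
import math

MATCH_THRESHOLD = 0.6  # illustrative value, not taken from any real system


def distance(a, b):
    """Euclidean distance between two face embeddings (toy vectors here)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def verify(enrolled, probe, threshold=MATCH_THRESHOLD):
    """1:1 verification: is the probe face close enough to the one enrolled face?"""
    return distance(enrolled, probe) <= threshold


# In a real system the embeddings come from a neural network run on the
# camera frame; these short vectors are invented for illustration.
owner = [0.1, 0.9, 0.3]
same_person = [0.12, 0.88, 0.31]
stranger = [0.9, 0.1, 0.7]

print(verify(owner, same_person))  # close embedding -> unlock
print(verify(owner, stranger))     # far embedding -> stay locked
```

The key property is that the comparison is one-to-one: the device only ever answers “is this the enrolled user?”, which is what separates verification from the broader identification systems discussed below.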
But some consumer-focused applications of facial recognition go beyond verification, meaning they’re not just trying to identify their own users but also other people. One early example of this is Facebook’s facial recognition-based photo tagging, which scans through photos users post to the platform in order to suggest certain friends they can tag. Similar technology is also at work in apps like Google Photos and Apple Photos, both of which can automatically identify and tag subjects in a photo.
Apple is actually using the tagging feature in its Photos app to power the new facial recognition feature in HomeKit-enabled security cameras and smart doorbells. Faces that show up in the camera feed can be cross-referenced with the database from the Photos app, so that you’re notified when, for instance, a specific friend is knocking on your door. Google’s Nest cameras and other facial recognition-enabled security systems offer similar features. Face-based identification is also popping up in some smart TVs that can recognize which member of a household is watching and suggest tailored content.
Facial recognition is being used for identification and verification in a growing number of devices, but there will likely be possibilities for the technology that go beyond those two consumer applications. The company HireVue scans faces with artificial intelligence to evaluate job applicants. Some cars, like the Subaru Forester, use biometrics and cameras to track whether drivers are staying focused on the road, and several companies are exploring software that can sense emotion in a face, a feature that could be used to monitor drivers. But that can introduce new bias problems, too.
“In the context of self-driving cars, they want to see if the driver is tired. And the idea is if the driver is tired then the car will take over,” said Nkonde, who also runs the nonprofit AI for the People. “The problem is, we don’t [all] emote in the same way.”
The blurry line between facial recognition for home security and private surveillance for police
Facial recognition systems have three primary ingredients: a source image, a database of known faces, and an algorithm that’s trained to match faces across different images. These algorithms vary widely in their accuracy and, as researchers like MIT’s Joy Buolamwini have documented, have been shown to be disproportionately inaccurate across categories like gender and race. Systems also differ in the size of their databases (that is, how many people they can identify) as well as in the number of cameras or images they have access to.
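To make the database-size point concrete, here is a minimal sketch of the 1:N “identification” task: the algorithm compares a probe face against every entry in a database and reports the closest match, if any is close enough. The names, vectors, and threshold are invented for illustration; real systems compare high-dimensional embeddings produced by a neural network.

```python
import math


def distance(a, b):
    """Euclidean distance between two face embeddings (toy vectors here)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def identify(probe, database, threshold=0.6):
    """1:N identification: search the whole database for the closest match.

    Returns the matching name, or None if nobody is close enough.
    """
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        d = distance(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None


# A tiny, made-up database of two enrolled faces.
database = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

print(identify([0.11, 0.89, 0.3], database))  # near alice's entry -> "alice"
print(identify([0.0, 0.0, 0.0], database))    # no close match -> None
```

The sketch also shows why scale matters: every additional database entry is another candidate the probe might be wrongly matched to, which is one reason larger systems carry larger misidentification risks.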
Face ID is an example of a facial recognition technology used for identity verification. The system checks that the face in front of the camera matches the one previously registered on the device. As such, the Apple algorithm is simply answering whether or not the person is the phone’s enrolled user; it is not designed to identify a large number of people. Only one user’s biometric information is involved, and importantly, Apple does not send that biometric data to the cloud; it remains on the user’s device.
When a system has to recognize more than one person, the task shifts from verification to identification, which is more complicated. Take Facebook’s facial recognition-based photo tagging: it scans through a user’s photos trying to spot any of the user’s friends who have opted in to the tagging feature, not just the user, which is Face ID’s only job. Facebook says it doesn’t share people’s facial templates with anyone, but it took years for the company to give users control over the feature. Facebook rolled out photo tagging back in 2010 without getting users’ permission and did not start asking users to opt in until 2019; this year, the company agreed to pay $550 million to settle a lawsuit over violating users’ privacy.
The question of consent becomes downright problematic in the context of security camera footage. Google Nest Cams, Apple HomeKit cameras, and other devices can let users create albums of familiar faces so they can get a notification when the camera’s facial recognition technology spots one of those people. According to Apple, the new HomeKit facial recognition feature lets users turn on notifications for when people tagged in their Photos app appear on camera. It also lets them set alerts for people who frequently come to their doorway, like a dog-walker, but not in their photo library app. Apple says the identification all happens locally on the devices.
The new Apple feature is similar to the familiar face detection feature available on Google’s Nest doorbell and security cameras. But use of that feature, which is turned off by default, is somewhat murky. Google warns users that, depending on the laws where they live, they may need to get the consent of those they add notifications for, and some may not be able to use it at all. For instance, Google does not make the feature available in Illinois, where the state’s strict Biometric Information Privacy Act requires explicit permission for the collection of biometric data. (That law was at the center of the recent $550 million Facebook settlement.) Google says a user’s face library is “stored in the cloud, where it is encrypted in transit and at rest, and faces aren’t shared beyond their structure.”
So Google- and Apple-powered security cameras are explicitly geared toward consumers, and the databases their facial recognition algorithms draw on are relatively limited in scope.
The line between consumer tech like this and the potential for powerful police surveillance tools, however, becomes blurred with the security systems made by Ring. Ring, which is owned by Amazon, partners with police departments, and while Ring says its products do not currently use facial recognition technology, multiple reports indicate that the company sought to build facial recognition-based neighborhood watchlists. Ring has also distributed surveys to beta testers to see how they would feel about facial recognition features. The scope of these partnerships is worrisome enough that on Thursday Rep. Raja Krishnamoorthi, who chairs a House Oversight subcommittee, asked for more information about Ring’s potential facial recognition integrations, among other questions about the product’s long-standing problem with racism.
So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”
When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”
So it’s hardly a surprise that officials are most aggressively pushing to limit the use of facial recognition technology by law enforcement. Police departments and similar agencies not only have access to a tremendous amount of camera footage but also to incredibly large face databases. In fact, the Georgetown Center for Privacy and Technology found in 2016 that half of American adults are in a law enforcement facial recognition network, which can include mug shots or simply profile pictures taken at the DMV.
And recently, the scope of face databases available to police has grown even larger. The controversial startup Clearview AI claims to have mined the web for billions of photos posted online and on social media to create a massive facial recognition database, which it has made available to law enforcement agencies. According to Jake Laperruque, senior counsel at the Project on Government Oversight, this represents a frightening future for facial recognition technology.
“Its effects, when it’s in government’s hands, can be really severe,” Laperruque said. “It can be really severe if it doesn’t work, and you have false IDs that suddenly become a lead that become the basis of a whole case and could cause someone to get stopped or arrested.”
He added, “And it can be really severe if it does work well and if it’s being used to catalog lists of people who are at protests or a political rally.”
Regulating facial recognition will be piecemeal
The Facial Recognition and Biometric Technology Moratorium Act recently introduced on Capitol Hill is sweeping. It would prohibit federal use of not only facial recognition but also other types of biometric technologies, such as voice recognition and gait recognition, until Congress passes another law regulating the technology. The bill follows other proposals to limit government use of the technology, including one that would require a court-issued warrant to use facial recognition and another that would limit biometrics in federally assisted housing. Some local governments, like San Francisco, have also limited their own acquisition of the technology.
So what about facial recognition when it’s used on people’s personal devices or by private companies? Congress has discussed the use of commercial facial recognition and artificial intelligence more broadly. A bill called the Commercial Facial Recognition Privacy Act would require companies to get people’s explicit consent before collecting their biometric information, and the Algorithmic Accountability Act would require large companies to check their artificial intelligence, including facial recognition systems, for bias.
But the ubiquitous nature of facial recognition means that regulating the technology will inevitably require piecemeal legislation and attention to detail so that specific use cases don’t get overlooked. San Francisco, for example, had to amend its facial recognition ordinance after it accidentally made police-department-owned iPhones illegal. When Boston passed its recent facial recognition ordinance, it created an exclusion for facial recognition used for logging into personal devices like laptops and phones.
“The mechanisms to regulate are so different,” said Brian Hofer, who helped craft San Francisco’s facial recognition ban, adding that he’s now looking at creating local laws modeled after Illinois’ Biometric Information Privacy Act that focus more on consumers. “The laws are so different it would be probably impossible to write a clean, clearly understood bill regulating both consumer and government.”
A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework covering the products consumers buy, including devices that come with facial recognition.
Meanwhile, the growing ubiquity of facial recognition technology stands to normalize a form of surveillance. As Rochester Institute of Technology professor Evan Selinger argues, “As people adapt to routinely using any facial scanning system and it fades to the background as yet another unremarkable aspect of contemporary digitally mediated life, their desires and beliefs can become reengineered.”
And so, even if there is a ban on law enforcement using facial recognition and it’s effective to a degree, the technology is still becoming a part of everyday life. We’ll eventually have to deal with its consequences.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.