Face recognition in the US is about to meet one of its biggest tests

Just four years ago, the movement to ban police departments from using face recognition in the US was riding high. By the end of 2020, around 18 cities had enacted laws forbidding the police from adopting the technology. US lawmakers proposed a pause on the federal government’s use of the tech. 

In the years since, that effort has slowed to a halt. Five municipal bans on police and government use passed in 2021, but none passed in 2022 or so far in 2023, according to a database from the digital rights group Fight for the Future. Some local bans have even been partially repealed, and today, few seriously believe that a federal ban on police use of face recognition could pass in the foreseeable future. In the meantime, without legal limits on its use, the technology has only grown more ingrained in people’s day-to-day lives.

However, in Massachusetts there is hope for those who want to restrict police access to face recognition. The state’s lawmakers are currently thrashing out a bipartisan state bill that seeks to limit police use of the technology. Although it’s not a full ban, it would mean that only state police could use it, not all law enforcement agencies.

The bill, which could come to a vote imminently, may represent an unsatisfying compromise, both to police who want more freedom to use the technology and to activists who want it completely banned. But it represents a vital test of the prevailing mood around police use of these controversial tools. 

That’s because when it comes to regulating face recognition, few states are as important as Massachusetts. It has more municipal bans on the technology than any other state, and it’s an epicenter for civil liberty advocates, academics, and tech companies. For a movement in need of a breakthrough, a lot rides on whether this law gets passed. 

Right now in the US, regulations on police use of face recognition are trapped in political gridlock. If a leader like Massachusetts can pass its bill, that could usher in a new age of compromise. It would be one of the strictest pieces of statewide legislation in the country and could set the standard for how face recognition is regulated elsewhere. 

On the other hand, if a vote is delayed or fails, it would be yet another sign that the movement is waning as the country moves on to other policy issues.

A history of advocacy

Privacy advocates and public interest groups have long had concerns about the invasiveness of face recognition, which is pivotal to a growing suite of high-tech police surveillance tools. Many of those fears revolve around privacy: live video-based face recognition is seen as riskier than retroactive photo-based recognition because it can track people in real time. 

Those worries reached a fever pitch in 2018 with the arrival of a bombshell: a privacy-shredding new product from a small company called Clearview AI.

Clearview AI’s powerful technology dramatically changed privacy and policing in the US. The company quietly gave free trials of the product to hundreds of law enforcement agencies across the country. Suddenly, police officers looking to identify someone could quickly comb through vastly more images than they’d ever had access to before—billions of public photos available on the internet.

The very same year, evidence started to mount that the accuracy of face recognition tools varied by race and gender. A groundbreaking study out of MIT by Joy Buolamwini and Timnit Gebru, called Gender Shades, showed that the technology is far less accurate at identifying people of color and women than white men. 

The US government corroborated those results in a 2019 study by the National Institute of Standards and Technology (NIST), which found that many commercial face recognition algorithms were 10 to 100 times more likely to misidentify Asian and Black faces than white ones. 

Politicians started to wake up to the risks. In May 2019, San Francisco became the first city in the US to ban police use of face recognition. One month later, the ACLU of Massachusetts announced a groundbreaking campaign called “Press Pause,” which called for a temporary ban on the technology’s use by police in cities across the state. Somerville, Massachusetts, became the second city in the United States to ban it. 

Over the next year, six more Massachusetts cities, including Boston, Cambridge, and Springfield, approved bans on police and government use of face recognition. Some cities even did so preemptively; in Boston, for example, police say they were not using the technology when it was banned. Major tech companies, including Amazon, Microsoft, and IBM, pulled the technology from their shelves, and civil liberties advocates pushed for a nationwide ban on its police use.

“Everyone who lives in Massachusetts deserves these protections; it’s time for the Massachusetts legislature to press pause on this technology by passing a statewide moratorium on government use of face surveillance,” Carol Rose, the executive director of the ACLU’s Massachusetts chapter, said in a statement after Boston passed its ban in June 2020. 

That moratorium would never happen. 

Is your face private? 

At first, momentum was on the side of those who supported a statewide ban. The murder of George Floyd in Minneapolis in May 2020 had sent shock waves through the country and reinvigorated public outcry about abuses in the policing system. In the search for something tangible to fix, activists both locally and nationwide alighted on face recognition. 

At the beginning of December 2020, the Massachusetts legislature passed a bill that would have dramatically restricted police agencies in the state from using face recognition, but Governor Charlie Baker refused to sign it, saying it was too limiting for police. He said he would never sign a ban into law. 

In response, the legislature passed a second, toned-down bill several weeks later. It was still a landmark achievement, restricting most government agencies in the state from using the technology. It also created a commission tasked with investigating further laws specific to face recognition. The commission included representatives from the state police, the Boston police, the Massachusetts Chiefs of Police Association, the ACLU of Massachusetts, several academic experts, the Massachusetts Department of Public Safety, and various lawmakers from both political parties, among others. 

Law enforcement agencies in the state were now permitted access only to face recognition systems owned and operated by the Registry of Motor Vehicles (RMV), the state police, or the FBI. As a result, the universe of photos that police could query was much more limited than what was available through a system like Clearview, which gives users access to all public photos on the internet. 

To hunt for someone’s image, police now had to submit a written request and obtain a court order. That’s a lower bar than a warrant, but an improvement over the previous practice, in which officers could simply email over a photo and ask for a search for suspects in misdemeanor and felony offenses including fraud, burglary, and identity theft. 

At the time, critics felt the bill was lacking. “They passed some initial regulations that don’t go nearly far enough but were an improvement over the status quo, which was nothing,” says Kade Crockford of the ACLU of Massachusetts, a commission member.

Still, the impetus toward a national ban was building. Just as the commission began meeting in June 2021, Senator Ed Markey of Massachusetts and seven other members of Congress introduced a bill to ban federal government agencies, including law enforcement, from using face recognition technology. All these legislators were left-leaning, but at the time, stricter regulation had bipartisan support.

The Massachusetts commission met regularly for a year, according to its website, with a mandate to draft recommendations for the state legislature about further legal limits on face recognition.

As debate ensued, police groups argued that the technology was essential for modern policing. 

“The sort of constant rhetoric of many of the appointees who were from law enforcement was that they did not want to tie the hands of law enforcement if the X, Y, Z worst situation happened—a terrorist or other extremely violent activity,” said Jamie Eldridge, a Massachusetts state senator who cochaired the commission, in an interview with MIT Technology Review. 

Despite that lobbying, in March 2022 the commission voted to issue a strict set of recommendations for the legal use of face recognition. It suggested that only the state police be allowed to use the RMV database for face matching during a felony investigation, and only with a warrant. The state police would also be able to request that the FBI run a face recognition search.

Of the commission’s 21 members, 15 approved the recommendations, including Crockford. Two abstained, and four dissented. Most of the police members of the commission voted no. 

One of them, Norwood Police Chief William Brooks, told MIT Technology Review there were three major things he disagreed with in the recommendations: requiring a warrant, restricting use of the technology to felonies only, and preventing police from accessing face recognition databases outside those of the RMV and the FBI. 

Brooks says the warrant requirement “makes no sense” and “would protect no one,” given that the law already requires a court order to use face recognition technology. 

“A search warrant is obtained when the police want to search in a place where a person has an expectation of privacy. We’re not talking about that here. We’re just talking about what their face looks like,” he says.

Other police groups and officers serving on the commission, including the Massachusetts public safety office, the Boston Police Patrolmen’s Association, and the Gloucester Police Department, did not respond to multiple requests for comment. 

An unsatisfying compromise 

After years of discussion, debate, and compromise, in July 2022 the Massachusetts commission’s recommendations were codified into an amendment that has already passed the state house of representatives and could come to a vote in the state senate any day. 

The bill allows image matching, which looks to retroactively identify a face by finding it in a database of images, in certain cases. But it bans two other types of face recognition: face surveillance, which seeks to identify a face in videos and moving images, and emotion recognition, which tries to assign emotions to different facial expressions. 

This more subtle approach is reminiscent of the path EU lawmakers have taken in evaluating the use of AI in public applications. That framework uses risk tiers: the higher the risks associated with a particular technology, the stricter the regulation. Under the proposed AI Act in Europe, for example, live face recognition on video surveillance systems in public spaces would be regulated more harshly than limited, non-real-time applications, such as an image search in the investigation of a missing child. 

Eldridge says he expects resistance from prosecutors and law enforcement groups, though he is “cautiously optimistic” that the bill will pass. He also says that many tech companies lobbied during the commission hearings, claiming that the technology is accurate and unbiased, and warning of an industry slowdown if the restrictions pass. Hoan Ton-That, CEO of Clearview, told the commission in his written testimony that “Clearview AI’s bias-free algorithm can accurately find any face out of over 3 billion images it has collected from the public internet.”

Crockford and Eldridge say they are hopeful the bill will be called to a vote in this session, which lasts until July 2024, but so far, no such vote has been scheduled. In Massachusetts, like everywhere else, other priorities like economic and education bills have been getting more attention. 

Nevertheless, the bill has been influential already. Earlier this month, the Montana state legislature passed a law that echoes many of the Massachusetts requirements. Montana will outlaw police use of face recognition on videos and moving images, and require a warrant for face matching. 

The real costs of compromise 

Not everyone is thrilled with the Massachusetts standard. Police groups remain opposed to the bill. Some activists don’t think such regulations are enough. Meanwhile, the sweeping face recognition laws that some anticipated on a national scale in 2020 have not been passed. 

So what happened between 2020 and 2023? During the three years that Massachusetts spent debating, lobbying, and drafting, the national conversation moved from police reform to rising crime, triggering political whiplash. As the pendulum of public opinion swung, face recognition became a bargaining chip among policymakers, police, tech companies, and advocates. Perhaps most important, we also grew accustomed to face recognition technology in our lives and public spaces.

Law enforcement groups nationally are becoming increasingly vocal about the value of face recognition to their work. For example, in Austin, Texas, which has banned the technology, Police Chief Joseph Chacon wishes he had access to it in order to make up for staffing shortages, he told MIT Technology Review in an interview. 

Some activists, including Caitlin Seeley George, director of campaigns and operations at Fight for the Future, say that police groups across the country have used similar arguments in an effort to limit face recognition bans.  

“This narrative about [an] increase in crime that was used to fight the defund movement has also been used to fight efforts to take away technologies that police argue they can use to address their alleged increasing crime stats,” she says. 

Nationally, face recognition bans in certain contexts, and even federal regulation, might be on the table again as lawmakers grapple with recent advances in AI and the attendant public frenzy about the technology. In March, Senator Markey and colleagues reintroduced the proposal to limit face recognition at a federal level. 

But some advocacy groups still disagree with any amount of political compromise, such as the concessions in the Montana and Massachusetts bills.  

“We think that advocating for and supporting these regulatory bills really drains any opportunity to move forward in the future with actual bans,” says Seeley George. “Again, we’ve seen that regulations don’t stop a lot of use cases and don’t do enough to limit the use cases where police are still using this technology.” 

Crockford wishes a ban had been politically feasible: “Obviously the ACLU’s preference is that this technology is banned entirely, but we get it … We think that this is a very, very, very compromised common-sense set of regulations.”

Meanwhile, some experts think that some activists’ “ban or nothing” approach is at least partly responsible for the current lack of regulations restricting face recognition. Andrew Guthrie Ferguson, a law professor at American University Washington College of Law who specializes in policing and tech, says outright bans face significant opposition, and that’s allowed continued growth of the technology without any guardrails or limits.

Face recognition abolitionists fear that any regulation of the technology will legitimize it, but the inability to find agreement on first principles has meant that regulation that might actually do some good has languished. 

Yet throughout all this debate, facial recognition technology has only grown more ubiquitous and more accurate.

In an email to MIT Technology Review, Ferguson said, “In pushing for the gold standard of a ban against the political forces aligned to give police more power, the inability to compromise to some regulation has a real cost.”

Uber’s facial recognition is locking Indian drivers out of their accounts 

One early evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad in his midsize sedan. He pointed his phone at his face and took a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.

It didn’t take long for Srikanth to come up with a theory as to why. He had just returned from visiting the Hindu Tirupati temple, 350 miles away, where he had shaved his head and prayed for a prosperous life. 

The Uber app prompted Srikanth to try again, so he waited a few minutes and took another picture. Rejected again. 

“I was worried about bookings. We have daily targets where if we complete a certain number of bookings, we get incentives,” Srikanth says. “I was anxious to log in and start driving, and not waste any time.” So he tried once more. This time he used a second phone to pull up an image of himself from before he visited the temple. When he took a picture of it, Uber informed him that his account had been blocked.

Srikanth is not alone. In a survey of 150 Uber drivers in India conducted by MIT Technology Review, almost half had been either temporarily or permanently locked out of their accounts as a result of problems with their selfies. Many suspected that a change in their appearance, such as facial hair, a shaved head, or a haircut, was to blame. Another quarter believed low lighting was the cause. 

Srikanth thinks the split-second decision to photograph the image on his second phone cost him his livelihood: he went from earning over $500 a month to nothing. He spent months afterward trying to get his account reinstated, to no avail. Eventually he had to move back to his hometown, where he works a few different jobs and makes barely 10% of what he used to.

Srikanth is far from the only worker in India who must interact with facial recognition technology. In addition to the country’s 600,000 Uber drivers, many others work for the homegrown ride-sharing platform Ola and for startups such as Swiggy, Zomato, and Urban Company. All ask their platform workers to upload selfies for logins or verifications. 

Niradi Srikanth, before and after he changed his facial hair and hairstyle. (Courtesy photos)

In other markets, gig workers have fought back against facial recognition. In the UK, for example, at least 35 Uber drivers claimed last year that their accounts were wrongly terminated. The Independent Workers’ Union of Great Britain has blamed a “racist algorithm,” and Uber has faced at least two lawsuits in the UK over the software.

Some countries and regions have moved to provide better protections for gig workers. The EU proposed a directive last year to improve working conditions and provide algorithmic transparency. And in September 2021, a California court struck down Proposition 22, a ballot initiative that excluded gig workers from employee benefits under state law. These regulations recognize that algorithmic systems can “negatively impact the rights of workers,” says Divij Joshi, a lawyer and a PhD candidate at University College London. But India currently has few legal protections in place for gig workers, Joshi says: “These same transparency efforts are not being seen in India from a policy or regulatory lens.”

If problems persist—and protections remain limited—they could have an outsize effect, and not just on work. “Labor platforms in India are starting to become a key interface between the worker, the market, and the government—they enable loans for cars or even credit for larger household expenses,” says Aditi Surie, a senior researcher at the Indian Institute for Human Settlements, who has done research on gig work in India. In a country where such work can catapult someone from precarity to a middle-class existence (especially when estimates suggest that the majority of people worldwide who fell into poverty during the pandemic live in India), getting blocked from or kicked off a platform can have devastating consequences.

Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.

But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says. 

More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad. 

Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues. 

Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check. 

Adnan Taqi holds up his phone in the driver’s seat of his car. Variations in lighting and facial hair have likely caused him to lose access to the app. (Photo: Selvaprakash Lakshmanan)

“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
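Kavka is describing a standard two-step pipeline: first detect landmark points on the face, then reduce the face to a numeric encoding and compare it against the photo on file. The sketch below illustrates that general approach using the open-source Python face_recognition library. It is an illustration only, not the Microsoft Face API that powers Real-Time ID Check, and the file names and match tolerance are assumptions made for the example.

```python
# pip install face_recognition -- an open-source wrapper around dlib's
# face detector, landmark model, and 128-dimension face encoder.
# Illustrative sketch only; not Uber's actual verification system.
import face_recognition

# Placeholder file names: the photo on file and a freshly taken selfie.
reference = face_recognition.load_image_file("reference.jpg")
selfie = face_recognition.load_image_file("selfie.jpg")

# Step 1: locate landmark points (chin, eyebrows, nose, eyes, lips).
# Heavy facial hair, low light, or poor contrast can make this step
# fail outright -- the equivalent of a rejected selfie.
landmarks = face_recognition.face_landmarks(selfie)
if not landmarks:
    print("No face detected in the selfie")

# Step 2: convert each detected face to an encoding and compare it
# against the reference encoding.
ref_enc = face_recognition.face_encodings(reference)
new_enc = face_recognition.face_encodings(selfie)
if ref_enc and new_enc:
    # tolerance=0.6 is the library's default; lower is stricter.
    match = face_recognition.compare_faces(
        [ref_enc[0]], new_enc[0], tolerance=0.6
    )[0]
    print("Verified" if match else "Mismatch: selfie rejected")
```

A shaved head or a new beard shifts the landmark points and, in turn, the encoding; if the distance from the reference encoding exceeds the tolerance, the comparison fails even though it is the same person in both photos.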

But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper auditing four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images, an error rate of more than 3%—the worst among the four. 

It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she explained in writing. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.” 

Microsoft declined to comment in response to questions sent by MIT Technology Review.

The problems don’t end with the algorithm’s decision. Drivers say the grievance redress mechanism that Uber follows is tedious, time-consuming, frustrating, and mostly unhelpful. They say they sometimes spend weeks trying to get their issues resolved. “We have to keep calling their help line incessantly before they unlock our accounts, constantly telling us that the server is down,” said Taqi, with a tone of frustration—but mostly a sense of defeat—in his voice. “It’s like their server is always down.”

Uber did not respond to a request for comment. 

Srikanth visited the Uber center at least three times a week for three months before he gave up and went back home. He stood in queues with some 80 to 100 other drivers. “The Uber people kept telling me my ID is permanently blocked and they can’t really do much,” he recalled. “They said I could go to the Bangalore [office] or just deploy another driver to drive my car.” 

Elizabeth Anne Watkins, an organizational sociologist from Princeton University who has extensively studied the impact of facial recognition on Uber drivers in the US, would likely find this pattern familiar. “Prone to malfunction in variable conditions, the system places a heavy burden on workers who are left with little organizational support when facial recognition fails,” Watkins, who is now a research scientist at Intel Labs, wrote in a 2020 paper. “Further, accountability for identity verification is shifted to the workers, who bear the consequences for systemic failures.”

Samantha Dalal, who studies how workers understand algorithmic systems, says there could be more transparency about how the AI made a decision. “Including some explanation that goes beyond ‘You are deactivated’” would help, says Dalal, a doctoral candidate at the University of Colorado Boulder. “Such capabilities exist.”

Absent any insight into what the mercurial, non-human boss wants, gig workers attempt a lot of trial and error while interacting with the apps, Dalal says. In the case of Srikanth, she explains that since he “couldn’t go back in time to before he had shaved his head, he did the next best thing and showed a picture of himself.”

It’s been over a year since Srikanth was locked out of Uber. Despite everything, he’s not hostile toward the company. He simply wants his old life back—one where he was able to make a life for himself in Hyderabad and build up some wealth. He can’t imagine returning to the city unless he can get behind the wheel again. 

Varsha Bansal is a freelance journalist based in Bangalore. Reporting for this story was supported by Pulitzer Center’s AI Accountability Network.