- New York lawmakers made a push to protect contact-tracing and biometric data from law enforcement.
- Scandals involving Big Tech’s relationship with the government have caused many to grow concerned about privacy violations and the sharing of sensitive information.
- Racial bias in biometric data and law enforcement has caused a unique distrust among Black communities.
First it was facial recognition software. Now it’s contact tracing due to the pandemic. Law enforcement and immigration agents have taken advantage of a number of new tools to help them track and apprehend targets, but privacy experts warn they may be overstepping their bounds.
New York lawmakers in July unanimously passed legislation to keep contact tracing data away from police and federal authorities in a bid to protect the privacy of state residents. Gov. Cuomo remains reluctant to sign it into law, however. Contact tracing was adopted in May as a response to the coronavirus pandemic ravaging New York City, bringing the City That Never Sleeps to a veritable standstill.
In the months since, civil liberties advocates have become increasingly wary about law enforcement’s use of contact-tracing information, combined with facial recognition software. Privacy and the potential for abuse remain top of mind.
“This is of greatest concern in New York’s communities of color and migrant communities,” Nathan Sheard, associate director of community organizing at the Electronic Frontier Foundation, wrote in a piece urging Gov. Cuomo to sign the bill. “Concern over biased policing and harsh immigration enforcement has led many to worry about how sensitive information needed to fight the COVID-19 pandemic might be shared beyond agencies charged with protecting public health.”
The Excesses of Law and Tech
Contact tracing has been a well-established public health intervention for decades; it allowed countries to successfully control the spread of Ebola during the 2014 outbreak. Modern contact tracing has taken a markedly more technical turn with the introduction of digital tracing apps, which allow for wider, more accurate scaling. That shift has also drawn distrust from public advocacy groups.
“Facial recognition technology displayed a propensity for racial bias through differing accuracy rates based on race.”
Biometric databases have helped states and companies reduce the spread of COVID-19; however, this kind of sensitive information brings privacy concerns with it. With any biometric database, who can access it and how it is protected become pressing questions. When government leaders seem reluctant to protect the privacy of people whose data was collected during a public health crisis, many have cause for worry.
Privacy concerns have been at the forefront of the anti-tech movement as new innovations have become a regular part of the average person’s day-to-day life. One of the largest hurdles for contact-tracing technologies is public distrust. About 57% of people in a poll conducted by The Washington Post and the University of Maryland said they did not trust Google or Apple to keep the data anonymous. Technology companies’ history of allowing access to sensitive data has left many skeptical. And they’re not wrong: Research published by the nonprofit accountability firm Tech Inquiry uncovered thousands of quietly signed contracts between the U.S. military and Big Tech.
Loss of Trust in the Police
According to a 2019 Pew Research poll, 56% of American adults trust law enforcement agencies to use facial recognition technology responsibly. The notable exception to the rule: the Black community. Less than half (43%) said they trust law enforcement to act responsibly with access to these technologies. It’s the only racial/ethnic demographic that does not express majority support.
Reluctance among Black communities comes with due cause. Facial recognition software has a well-documented anti-Black bias, often misidentifying Black people or even mistaking them for gorillas. Emily Black, researcher and Ph.D. student in the Accountable Systems Lab at Carnegie Mellon, co-authored the white paper “Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains,” which addressed the efficacy of these technologies and some of their primary issues.
“Facial recognition technology displayed a propensity for racial bias through differing accuracy rates based on race,” she said during a phone interview with Lifewire. “These are socio-technical issues, meaning when you put a technical system and use it somewhere in society we have to understand where and how it’s used and how it interacts with systemic biases.”
With the rise of the Black Lives Matter movement, trust in law enforcement hit a record low. Only 48% of respondents in a 2020 Gallup poll said they had confidence in the police, down five percentage points from the previous year. Trust in police use of facial recognition technology and other biometric data likely took a hit as well. Once again, racial differences abound: Black respondents expressed only 19% confidence, down 11 points year-over-year, compared with 56% among their white counterparts, a four-point decrease.
“This is a technology that could really change our societal landscape and how we feel about going outside and showing our face,” Black said. “The repercussions are so huge, but I don’t see what the huge gain is. We need to take time to figure out what are the situations when we can and should and how are we going to limit the usage to only those places and how we’re going to ensure that in those places it’s not producing more bias and it’s not compounding systemic biases in our societies.”