- A New York school district is under fire for using facial recognition software that discriminates against Black people.
- A recent audit found that the system misidentifies Black women 16 times more often than white men.
- Bias in facial recognition software is a common problem, experts say.
A school district in New York is under fire for using facial recognition technology that reportedly misidentifies Black students more than white students.
The New York Civil Liberties Union is suing the New York State Education Department for approving the use of a facial recognition system by the Lockport City Schools, a move it claims violates student data protection laws. A recent audit found that the system is the least accurate when identifying Black people.
“Face surveillance systems infringe on the privacy rights of students and raise the specter of schools sharing personally identifiable information with law enforcement or federal immigration authorities like ICE,” Beth Haroules, senior staff attorney at the NYCLU, said in a news release.
“Despite district claims that the system will not catalog students, the technology and nature of the data collected do not allow for students to remain anonymous,” continued Haroules. “The software is also inaccurate and is especially likely to misidentify women, young people, and people of color, disproportionately exposing them to the risks of misidentification and law enforcement.”
A Growing Controversy
The district installed 300 digital cameras and spent about $2.7 million on the security package, according to a report in the Buffalo News. And while the audit found that Black women are 16 times more likely than white men to be misidentified by the system, the NYCLU said, the system is still expected to produce only about one false match in every 6,250 cases, an overall accuracy of roughly 99.98%.
The controversy over facial recognition software is happening outside schools as well. The city council of Madison, Wisconsin, recently banned city agencies from using facial recognition, citing its unreliability.
“The technology has proven to be unreliable and faulty,” local politician Rebecca Kemble told the Wisconsin State Journal. “We also don’t want this technology to be used to further worsen the racial disparities that there already are in our criminal justice system.”
Bias in facial recognition software is a common problem, experts say.
“If the algorithm has not seen many examples of a particular ethnic group, it is less able to do fine-grain analysis to distinguish people among that ethnic group,” Marios Savvides, director of the CyLab Biometrics Center at Carnegie Mellon University, said in an email interview. “This is where smart AI algorithm design is done to deal with [imbalanced] data, and remove ethnic bias from becoming a problem in the algorithm design.”
Fighting Racist Tech
There are ways to design AI algorithms so they’re less affected by ethnic bias, Savvides said.
“In general, we see too many AI solutions now are cookie-cutter black box training and deployment with little thought, or real AI design, or signal processing going into them,” he added. “But this is exactly where the notion of explainable AI comes into place to try to put checks and balances so that the algorithms are built to try to avoid allowing such biases from building up in the algorithm.”
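One common technique for dealing with the imbalanced training data Savvides describes, sketched here for illustration (this is not his or any vendor's actual method), is to weight each training example inversely to its group's frequency, so under-represented groups contribute as much to the model's loss as over-represented ones:

```python
# Illustrative sketch only: inverse-frequency example weighting, one simple
# way to keep an imbalanced dataset from dominating a model's training.
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Map each example to a weight proportional to 1 / its group's count,
    normalized so the average weight across the dataset is 1.0."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return [total / (len(counts) * counts[g]) for g in group_labels]

# A toy training set where group "a" has four times the examples of group "b".
labels = ["a", "a", "a", "a", "b"]
weights = inverse_frequency_weights(labels)
print(weights)  # each "b" example now counts 4x as much as each "a" example
```

These weights would then be passed to a training routine (most machine learning libraries accept per-sample weights), letting the loss treat each demographic group with equal importance even when the raw data does not.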
Eliminating bias requires biometrics technologies to be both accurate and consistent in how they capture and identify users, Stephen Ritter, CTO of biometrics company Mitek Systems, said in an email interview.
“For instance, if a system can regularly identify 90% of white males’ faces, but only 40% of Black female faces, it offers accuracy, but lacks consistency and perpetuates the problem of bias,” he said. “Alternatively, if a solution is able to consistently identify roughly 70% of all faces 80% of the time, it is slightly less accurate, but ultimately a better, more equitable tool.”
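Ritter's point can be made concrete with a toy comparison (the numbers below are hypothetical, loosely mirroring his example): a system's average accuracy can look respectable while the gap between demographic groups stays enormous.

```python
# Illustrative sketch with made-up numbers: comparing two hypothetical
# face-matching systems on mean accuracy vs. group-to-group consistency.

def group_gap(accuracy_by_group):
    """Return the mean accuracy across groups and the largest gap
    between the best- and worst-served groups."""
    rates = list(accuracy_by_group.values())
    overall = sum(rates) / len(rates)
    gap = max(rates) - min(rates)
    return overall, gap

# System A: decent on average, but wildly inconsistent across groups.
system_a = {"white_male": 0.90, "black_female": 0.40}
# System B: slightly lower average, but far more consistent.
system_b = {"white_male": 0.72, "black_female": 0.68}

for name, system in [("A", system_a), ("B", system_b)]:
    overall, gap = group_gap(system)
    print(f"System {name}: mean accuracy {overall:.0%}, group gap {gap:.0%}")
```

By this measure, System B is the "better, more equitable tool" Ritter describes: its mean accuracy is actually higher here, and its 4-point gap between groups is a fraction of System A's 50-point gap.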
Paul Bischoff, privacy advocate and editor at technology research site Comparitech, argues that facial recognition in schools should be highly regulated. “Parents must give informed consent for their kid’s face to be put in a face recognition database,” he said in an email interview. “Access to that database needs to be limited to as few essential persons as possible.”
Students of color already face systemic disadvantages in the education system. The last thing they need is to be misidentified by wayward algorithms.