By David McDaniels

June 14, 2021

Facial recognition is far from the measure of effective justice that police departments and large tech developers present it to be. They cast it as the future of policing and a step away from racial profiling; the New York Police Department writes that “facial recognition technology is a valuable tool in solving crimes and increasing public safety.” In reality, its features are over-applied and produce racially biased security. It is a tool that ‘sees’ some and ‘ignores’ others, and its implementation by police forces targets and controls the groups they deem threats. These are not solutions, but the same old problems presented in new ways.

At its very core, facial recognition is not objective. The technology is shaped by those who write its code. While algorithms remove some human influence, the conscious and unconscious biases of engineers still make their way into artificial intelligence programs. Part of this stems from barriers to entry into STEM fields, which leave the profession disproportionately made up of white men. That homogeneity distorts ethical decision-making and representation: issues are overlooked, or intentionally ignored, by a demographic that remains largely unaffected by the negative externalities of the technology it creates.

Further, facial recognition software is often developed from racially biased statistics. Training data is drawn from mugshot databases that are disproportionately filled with people of color and skewed by racially influenced arrest records. A review of the New York City Police Department’s database found that of the 42,000 “gang affiliate” mugshot photos used to build its facial recognition program, 99% were of Black and Latinx people. None of these 42,000 people can challenge their inclusion in the database. This dramatically imbalanced data denies due process and perpetuates racial discrimination through technology.

Facial recognition is also not nearly as accurate as advertised. Rather than identifying suspects fairly, the technology is reliably accurate mostly for white subjects, a discrepancy that particularly harms women of color. A study by Joy Buolamwini and Timnit Gebru found that modern AI misidentifies Black women at a rate of 35%.

“…modern AI misidentifies Black women at a rate of 35%.”
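That figure comes from measuring error rates separately for each demographic subgroup rather than reporting a single aggregate accuracy number. As a rough sketch only (the records and group labels below are hypothetical, not data or code from Buolamwini and Gebru’s study), a disaggregated audit works roughly like this:

# A hypothetical sketch of a disaggregated accuracy audit,
# not code or data from the Gender Shades study itself.
from collections import defaultdict

# Each record: (true identity, predicted identity, demographic subgroup)
predictions = [
    ("person_a", "person_a", "lighter-skinned men"),
    ("person_b", "person_c", "darker-skinned women"),
    # ... many more labeled evaluation examples ...
]

errors = defaultdict(int)
totals = defaultdict(int)

for true_id, predicted_id, subgroup in predictions:
    totals[subgroup] += 1
    if predicted_id != true_id:
        errors[subgroup] += 1

# An aggregate accuracy figure can look acceptable even while one
# subgroup's error rate is several times higher than another's.
for subgroup, total in totals.items():
    print(f"{subgroup}: {errors[subgroup] / total:.1%} error rate")

Disaggregation is the key point: a system can advertise high overall accuracy while failing one group far more often than another.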

Even perfectly accurate facial recognition technology can be abused to infringe upon human rights through unjust applications. Police departments have weaponized it to expand surveillance and track down people of interest. In addition to fueling racial profiling, the technology has been used to crack down on protests. After the Baltimore protests over the police killing of Freddie Gray, the Baltimore Police Department began using facial recognition to analyze surveillance videos, identify protesters, match their photos to government databases, and target individuals. The program drew scrutiny from human and civil rights attorneys across the country because its details were intentionally kept unclear: there was no standard for who was being investigated and no safeguard against overreach or the deliberate targeting of protesters of color. It is a system built to discriminate.

“While emerging technology has an important place in improving human rights interventions, we cannot forget the danger of it being used to expand discriminatory systems.”

These racial disparities are why human rights groups are calling for a ban on this technology in policing. The Algorithmic Justice League writes, “facial surveillance threatens rights including privacy, freedom of expression, freedom of association and due process.” NGOs like the Electronic Frontier Foundation have also joined a multistakeholder process on face recognition aimed at bringing more transparency and accountability to the field.

A swift and substantial response to the use of facial recognition technology by police departments is imperative. Its deployment must be paused until it no longer perpetuates racism. Leaders like Joy Buolamwini of the MIT Media Lab are fighting for this change: there must be real, diverse representation in the creation of this artificial intelligence, in its rollout, and in the systems that hold it accountable. These changes are starting to happen. The King County Council in Washington recently announced a ban on facial recognition technology, citing its racial disparities and biases. This is a crucial victory for the movement. As Buolamwini emphasizes, there cannot be justice in technology until it acknowledges its racial bias. While emerging technology has an important place in improving human rights interventions, we cannot forget the danger of it being used to expand discriminatory systems. Simply put, there can be no just system built on racism.

 

David McDaniels is an intern at ICAAD and a junior at Georgetown University from Westchester, NY. He studies Government and Sociology and serves as president of the campus ACLU chapter.
