By David McDaniels
June 14, 2021
At its core, facial recognition is not objective. The technology is shaped by the people who write the code. While algorithms remove some human influence, both the unconscious and conscious biases of engineers make their way into artificial intelligence programs. Part of the problem lies in the barriers to entry in STEM fields, which remain disproportionately white and male. This imbalance impairs ethical decision-making and representation: issues are either intentionally ignored or simply overlooked by a demographic that remains largely unaffected by the negative externalities of the technology it creates.
Further, facial recognition software is often developed with racially biased data. Training information is drawn from mugshot databases that are disproportionately filled with people of color and skewed by previous racially influenced arrest records. A study of the New York City Police Department found that of the 42,000 "gang affiliate" mugshot photos the department used to build its facial recognition program, 99% depicted Black and Latinx people. Moreover, none of these 42,000 people have any ability to challenge their inclusion in the database. This dramatically imbalanced data denies due process and perpetuates racial discrimination through technology.
“…modern AI misidentifies Black women at a rate of 35%.”
“While emerging technology has an important place in improving human rights interventions, we cannot forget the danger of it being used to expand discriminatory systems.”
David McDaniels is an Intern at ICAAD and is a junior at Georgetown University from Westchester, NY. He is studying Government and Sociology while serving as president of the campus ACLU chapter.