The Mathematics of Injustice: Why Facial Recognition's Algorithmic Bias Makes It Fundamentally Unfit for Democratic Society
Introduction
One can argue against AI-based facial recognition technology (FRT) from multiple perspectives. Warren and Brandeis (1890) established the foundational right to privacy, the ‘right to be let alone’, whilst Moore (2008) extends this to encompass control over personal information and bodily autonomy. FRT violates the principles of meaningful consent: as Millum and Bromwich (2018) argue, valid consent requires both awareness and the ability to refuse, yet pedestrians can neither escape surveillance cameras nor extract their biometric data once it enters AI systems (Winograd, 2023). Macnish (2018) warns of chilling effects on democratic participation, where pervasive surveillance deters protest and free expression. Facial recognition also reduces the ‘face-to-face’ encounter to mere mathematical computation, diminishing human dignity and our moral duty to treat individuals as ends in themselves. However compelling these arguments, I shall adopt the principle of depth over breadth, concentrating specifically on how algorithmic bias and disproportionate harm make facial recognition fundamentally incompatible with democratic values of equality. This focused approach reveals not merely technical shortcomings but systematic injustices embedded within the technology itself.
Argument
The deployment of FRT fails us ethically as a society, systematically amplifying statistical bias into lived discrimination. The evidence is clear: NIST's Face Recognition Vendor Test reveals that algorithms misidentify women and people of African or East Asian heritage at significantly higher rates than white males (Grother et al., 2019). In practice, even a 1% false match rate applied to a database of one million faces translates into 10,000 false positives, that is, 10,000 instances where innocent individuals are flagged for scrutiny, investigation, or worse. Furthermore, as Hall et al. (2022) demonstrate in their systematic study of bias amplification, machine learning models do not merely reflect existing inequalities; they compound them through feedback loops that reinforce discriminatory patterns over time. This amplification effect is particularly dangerous because, as Mehrabi et al. (2021) argue in their comprehensive survey on bias and fairness, attempts to define algorithmic ‘fairness’ encounter fundamental trade-offs: one cannot simultaneously optimise for equal false positive rates across groups and equal positive predictive values. The very metrics we use to measure fairness are mutually incompatible, revealing that bias is not a solvable problem but an inherent characteristic of these systems.
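To make the scale of these figures concrete, the short Python sketch below works through both points with hypothetical numbers (the error rates, database size, and prevalence values are illustrative, not the NIST measurements): first the false positive arithmetic for a large database, then why equal false positive rates and equal positive predictive values cannot both hold once groups differ in how often a genuine match appears among them.

# Minimal numerical sketch with hypothetical figures (not NIST data).

def false_positives(database_size: int, false_match_rate: float) -> int:
    # Expected number of innocent people flagged in one sweep of the database.
    return round(database_size * false_match_rate)

def positive_predictive_value(tpr: float, fpr: float, prevalence: float) -> float:
    # P(genuine match | the system flags the person), by Bayes' rule.
    true_alarms = tpr * prevalence
    false_alarms = fpr * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# (1) A 1% false match rate applied to one million enrolled faces.
print(false_positives(1_000_000, 0.01))  # 10000 innocent people flagged

# (2) Hold TPR and FPR identical for two groups that differ only in prevalence
# (how often a genuine suspect appears among those scanned). PPV still diverges,
# so the two fairness criteria cannot be satisfied simultaneously.
for prevalence in (0.001, 0.0001):
    ppv = positive_predictive_value(tpr=0.99, fpr=0.01, prevalence=prevalence)
    print(f"prevalence={prevalence:.4%}  PPV={ppv:.1%}")
# Roughly 9% for the first group and 1% for the second: most flags are false
# alarms, and the imbalance tracks group membership.

The point of the sketch is not the particular numbers but the structure: whenever base rates differ between groups, equalising one fairness metric forces inequality in another.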
The European Data Protection Board's proportionality standards demand that surveillance measures be both necessary and proportionate to their aims. Facial recognition fails on both counts. It is not necessary, because traditional policing methods, or targeted surveillance of individuals already subject to legal process, achieve the same ends without mass biometric surveillance. It is not proportionate, because the societal harm of deploying such biased automated systems far outweighs any claimed security benefit. Higher error rates for marginalised communities mean the technology fails precisely where justice systems most need accuracy.
Evidence
These concerns are not mere hypothetical possibilities; they have materialised tragically in real-world applications. In Detroit, for example, two Black men were wrongfully arrested after being misidentified by FRT running on Microsoft-powered systems (Fergus, 2024). The consequences of Microsoft's provision of AI services to the Israeli military during the 2023-24 Gaza conflict are graver still. According to Associated Press investigations, AI translation errors nearly led to deadly mistakes: Arabic slang for ‘payment’ was mistranslated as ‘rocket grip’, and a spreadsheet of high-school students (titled ‘finals’ in Arabic, referring to exams) was confused with a list of militants (Biesecker & Mednick, 2025). Only intervention by a senior Arabic-speaking officer, aware of the context and the discrepancies, prevented innocent civilians from being targeted on the basis of algorithmic errors. These Microsoft AI systems, originally built for commercial use in English, are known to fabricate text that was never spoken, including racial commentary and violent rhetoric. The error rates are difficult to evaluate because there has been no transparent, independent investigation of the deaths arising from Israel's use of Microsoft's AI algorithms; nevertheless, cases have been reported and documented. In one flawed AI-assisted strike, three young Palestinian girls and their grandmother were killed; in another, a Lebanese family and their children (Biesecker et al., 2025).
Conclusion
The question is not whether we can make facial recognition less biased, but whether any level of algorithmic discrimination is acceptable in a democratic society where every human life holds equal value, whatever a person's skin colour or gender. The answer must be an unequivocal no.
References
Biesecker, M., & Mednick, S. (2025, February 28). As Israel uses U.S.-made AI models in war, concerns arise about tech’s role in who lives and who dies. The Associated Press. https://www.ap.org/news-highlights/best-of-the-week/first-winner/2025/as-israel-uses-u-s-made-ai-models-in-war-concerns-arise-about-techs-role-in-who-lives-and-who-dies/
Biesecker, M., Mednick, S., & Burke, G. (2025, February 18). How US tech giants’ AI is changing the face of warfare in Gaza and Lebanon. AP News. https://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108
Fergus, R. (2024). Biased technology: The automated discrimination of facial recognition. ACLU of Minnesota. https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition
Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280
Hall, M., van der Maaten, L., Gustafson, L., & Adcock, A. (2022). A systematic study of bias amplification. arXiv. https://arxiv.org/abs/2201.11706
Macnish, K. (2018). Key ethical issues in surveillance. In The ethics of surveillance: An introduction. Routledge.
Mehrabi, N., Morstatter, F., Lerman, K., Galstyan, A., & Saxena, N. (2021). A survey on bias and fairness in machine learning. https://doi.org/10.48550/arXiv.1908.09635
Millum, J., & Bromwich, D. (2018). Understanding, communication, and consent. Ergo, an Open Access Journal of Philosophy. https://doi.org/10.3998/ergo.12405314.0005.002
Moore, A. (2008). Defining privacy. Journal of Social Philosophy, 39(3), 411–428. https://doi.org/10.1111/j.1467-9833.2008.00433.x
Warren, S. D., & Brandeis, L. D. (1890). The right to privacy. Project Gutenberg. https://www.gutenberg.org/ebooks/37368
Winograd, A. (2023). Loose-lipped large language models spill your secrets: The privacy implications of large language models. https://leeds.primo.exlibrisgroup.com/discovery/fulldisplay?context=PC&vid=44LEE_INST:VU1&search_scope=My_Inst_CI_not_ebsco&tab=AlmostEverything&docid=cdi_gale_infotracmisc_A766746931