Algorithmic racism, reinforcement of prejudice and the use of AI: perspectives and challenges for digital criminal investigation

Authors

Anderson Andrade Bichara, Agostinho Gomes Cascardo Jr., Franco Perazzoni

DOI:

https://doi.org/10.5281/zenodo.11175558

Keywords:

Algorithmic Racism, AI Bias, Digital Criminal Investigation, Technological Challenges in Justice, Prejudices and Stigmas

Abstract

This article explores the duality of artificial intelligence (AI) in criminal investigation, highlighting both its transformative potential and the significant challenges it presents, especially the reinforcement of biases and the emergence of algorithmic racism. As AI systems are increasingly adopted, it becomes imperative to steer these technological advances towards strengthening democratic principles, while critically examining the prospects of police use of AI. This work aims to identify and analyze manifestations of algorithmic racism and of biases reinforced by AI technologies in criminal investigation. In addressing these issues, it seeks to contribute to the debate on how to overcome such challenges, promoting an investigative practice that respects and protects individuals' fundamental rights while harnessing the benefits of technological innovation.

Author Biographies

Anderson Andrade Bichara, Universidad Católica de Ávila - UCAV - Spain

Master's in Applied Criminology and Police Investigation (UCAV, Spain, 2023). Delegate of the Executive Secretariat of AMERIPOL. Federal Police Delegate. Lattes CV: http://lattes.cnpq.br/3648943527323618

Agostinho Gomes Cascardo Jr., Universidade Aberta - UAb - Portugal

Doctoral candidate in Social Sustainability and Development (UAb, Portugal). Master's in Geographic Information Systems and Science (UNL, Portugal, 2020). Brazilian Police Attaché in Bolivia. Federal Police Delegate. Lattes CV: https://lattes.cnpq.br/8536086575223316

Franco Perazzoni, Universidade Aberta - UAb - Portugal

PhD in Social Sustainability and Development (UAb, Portugal, 2021). Master's in Geographic Information Systems and Science (UNL, Portugal, 2012). Master's in Senior Management in International Security (UCM3, Spain, 2023). Lattes CV: http://lattes.cnpq.br/3027574581399092

Published

2024-06-04

How to Cite

Bichara, A. A., Cascardo Jr., A. G., & Perazzoni, F. (2024). Algorithmic racism, reinforcement of prejudice and the use of AI: perspectives and challenges for digital criminal investigation. Boletim IBCCRIM, 32(379), 23–26. https://doi.org/10.5281/zenodo.11175558

Issue

Boletim IBCCRIM, 32(379), 2024

Section

Dossier: Current Challenges of Criminal Investigation for the Police
