Scrutinizing “inefficiencies”: delving into the crossroads of artificial intelligence, content moderation, and algorithmic discrimination from a decolonial perspective

Authors

Abstract

Content moderation on social media platforms is now performed mainly by artificial intelligence. Devised, fed, and trained by humans, the artificial intelligence used for content moderation is prone to reproducing pre-existing biases from the physical world in the digital realm. This paper questions the alleged "shortcomings" of artificial intelligence in content moderation, namely the concrete risks of algorithmic discrimination against marginalized groups. Although platforms themselves often frame these episodes as matters of machine "inefficiency", I argue that recurring instances of algorithmic discrimination reveal, in reality, a programmed bias or selectivity.

Author Biography

Amanda Chami, Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio)

Master's student in State Theory and Constitutional Law at PUC-Rio. Researcher in the Sentence Rewriting (Reescrita de Sentenças) group, organized by Prof. Márcia Nina Bernardes (PUC-Rio). Holds a law degree from PUC-Rio. Attorney at the law firm Terra, Tavares, Ferrari, Schenk, Elias Rosa.

Published

2024-06-22

How to Cite

Chami, A. (2024). Scrutinizing “inefficiencies”: delving into the crossroads of artificial intelligence, content moderation, and algorithmic discrimination from a decolonial perspective. Revista Brasileira De Direito Civil, 33(1), 281–298. Retrieved from https://rbdcivil.emnuvens.com.br/rbdc/article/view/1017