Abstract
As artificial intelligence becomes increasingly central to online content moderation, the question of what “human-centered” design truly entails has gained renewed urgency. This paper examines the linguistic and cultural dimensions of algorithmic decision-making, focusing on how AI moderation systems reproduce structural inequalities between high-resource and low-resource languages. While major technology firms frame human oversight as a safeguard against bias, empirical evidence suggests that moderation errors disproportionately affect speakers of non-dominant languages, for which automated translation and context detection remain unreliable. The study applies Critical Discourse Analysis (CDA) to a corpus of policy statements, transparency reports, and Oversight Board decisions published by Meta between 2020 and 2025. Through qualitative coding, it investigates how linguistic difference is represented, problematized, or rendered invisible within the discourse of AI ethics and “responsible innovation.” The analysis highlights the gap between the rhetoric of inclusivity and the operational realities of language-based AI systems, where “human-centered” design often centers the linguistic and cultural assumptions of dominant languages. By revealing how linguistic hierarchies persist within ostensibly neutral technological infrastructures, this paper contributes to the 2026 conference’s special focus on Human-Centered AI Transformations. It argues that genuine human-centered AI requires not only human oversight but also linguistic competence, cultural contextualization, and epistemic plurality: conditions under which digital technologies can more equitably serve the diverse societies they claim to protect.
Presenters
Réka Brigitta Szaniszló
Assistant Professor, Faculty of Law and Political Sciences, International and Regional Studies Institute, University of Szeged, Hungary
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2026 Special Focus—Human-Centered AI Transformations
Keywords
Content moderation, Linguistic bias, Critical discourse analysis, Algorithmic justice
