Please use this identifier to cite or link to this item: https://hdl.handle.net/10953/1911
Title: MarIA and BETO are sexist: evaluating gender bias in large language models for Spanish
Authors: Garrido-Muñoz, Ismael
Martínez-Santiago, Fernando
Montejo-Ráez, Arturo
Abstract: The study of bias in language models is a growing area of work; however, both research and resources focus on English. In this paper, we make a first approach focusing on gender bias in freely available Spanish language models trained using popular deep neural networks, such as BERT or RoBERTa. Some of these models are known for achieving state-of-the-art results on downstream tasks. These promising results have promoted the integration of such models into many real-world applications and production environments, which could be detrimental to the people affected by those systems. This work proposes an evaluation framework to identify gender bias in masked language models, designed with explainability in mind to ease the interpretation of the evaluation results. We have evaluated 20 different models for Spanish, including some of the most popular pretrained ones in the research community. Our findings show that varying levels of gender bias are present across these models. The approach compares the adjectives proposed by each model for a set of templates. We classify the predicted adjectives into understandable categories and compute two new metrics from the model predictions: one based on the internal state (probability) and the other on the external state (rank). These metrics are used to reveal biased models according to the given categories and to quantify the degree of bias of the models under study.
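The evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the template pair, the predicted adjectives with their probabilities and ranks, and the adjective-to-category mapping below are all hypothetical stand-ins for what a masked language model and the authors' category lexicon would provide.

```python
from collections import defaultdict

# Hypothetical masked-LM predictions for a gendered template pair such as
# "Él es muy [MASK]." / "Ella es muy [MASK]." Each entry is
# (adjective, probability, rank) as returned by a fill-mask model.
PRED_MASC = [("fuerte", 0.12, 1), ("inteligente", 0.08, 2), ("trabajador", 0.05, 3)]
PRED_FEM = [("guapa", 0.15, 1), ("dulce", 0.09, 2), ("inteligente", 0.04, 3)]

# Hypothetical mapping of predicted adjectives into understandable categories.
CATEGORIES = {
    "fuerte": "physical", "guapa": "physical", "inteligente": "intellect",
    "trabajador": "behaviour", "dulce": "behaviour",
}

def category_scores(predictions):
    """Aggregate, per category, the probability mass (internal state) and
    the inverse rank (external state) of the predicted adjectives."""
    prob, rank = defaultdict(float), defaultdict(float)
    for adj, p, r in predictions:
        cat = CATEGORIES.get(adj, "other")
        prob[cat] += p        # probability-based metric
        rank[cat] += 1.0 / r  # rank-based metric: better rank, more weight
    return prob, rank

def bias_gap(masc, fem):
    """Per-category difference between masculine and feminine scores; a large
    absolute gap suggests the model ties that category to one gender."""
    cats = set(masc) | set(fem)
    return {c: masc.get(c, 0.0) - fem.get(c, 0.0) for c in cats}

masc_prob, masc_rank = category_scores(PRED_MASC)
fem_prob, fem_rank = category_scores(PRED_FEM)
print(bias_gap(masc_prob, fem_prob))
print(bias_gap(masc_rank, fem_rank))
```

With real models, the prediction lists would come from running each of the 20 Spanish models on the full template set and keeping the top-k adjective predictions per mask.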
Keywords: BERT
Bias evaluation
Deep learning
Gender bias
Language model
RoBERTa
Issue Date: 23-Jul-2023
Sponsorship: Funding for open access publishing: Universidad de Jaén/CBUA. This work has been partially supported by the WeLee project (1380939, FEDER Andalucía 2014-2020) funded by the Andalusian Regional Government; by the projects CONSENSO (PID2021-122263OB-C21), MODERATES (TED2021-130145B-I00), and SocialTOX (PDC2022-133146-C21) funded by Plan Nacional I+D+i from the Spanish Government; and by the project PRECOM (SUBV-00016) funded by the Ministry of Consumer Affairs of the Spanish Government.
Publisher: Springer
Citation: Garrido-Muñoz, I., Martínez-Santiago, F. & Montejo-Ráez, A. MarIA and BETO are sexist: evaluating gender bias in large language models for Spanish. Lang Resources & Evaluation (2023). https://doi.org/10.1007/s10579-023-09670-3
Appears in Collections: DI-Artículos

Files in This Item:
File: 2023_MarIAandBETO.pdf — Description: Published version — Size: 1.82 MB — Format: Adobe PDF


This item is protected by original copyright