Please use this address to cite this document: https://hdl.handle.net/10953/1991
Title: A Survey on Bias in Deep NLP
Author(s): Garrido-Muñoz, Ismael
Montejo-Ráez, Arturo
Martínez-Santiago, Fernando
Ureña-López, L. Alfonso
Abstract: Deep neural networks are the dominant approach in many machine learning areas, including natural language processing (NLP). Thanks to the availability of large text corpora and the capability of deep architectures to shape internal language mechanisms through self-supervised learning (also known as "pre-training"), versatile and high-performing models are released continuously for every new network design. These networks learn a probability distribution of words and their relations from the training collection used, inheriting the potential flaws, inconsistencies and biases contained in that collection. As pre-trained models have proven very useful for transfer learning, dealing with bias has become a relevant issue in this new scenario. We introduce bias in a formal way and explore how it has been treated in several networks, in terms of detection and correction. In addition, available resources are identified and a strategy to deal with bias in deep NLP is proposed.
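As an illustration of the kind of inherited bias the abstract describes, a pre-trained masked language model can be probed by comparing the probabilities it assigns to pronouns in otherwise identical templates. The sketch below is not taken from the surveyed paper; it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint purely as illustrative choices.

# Minimal sketch: probing a pre-trained masked language model for gender bias.
# Assumes the "transformers" package and the "bert-base-uncased" checkpoint,
# both illustrative choices rather than resources cited by the paper.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Identical templates except for the profession; compare how the model
# fills the pronoun slot in each case.
for sentence in [
    "The nurse said that [MASK] would arrive soon.",
    "The engineer said that [MASK] would arrive soon.",
]:
    print(sentence)
    for prediction in unmasker(sentence, top_k=3):
        # Each prediction carries the proposed token and its probability.
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")

Skewed pronoun probabilities across such paired templates are one common symptom of the dataset-inherited bias that the survey discusses in terms of detection and correction.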
Keywords: natural language processing
deep learning
biased models
Publication date: 2-Apr-2021
Sponsorship: This study is partially funded by the Spanish Government under the LIVING-LANG project (RTI2018-094653-B-C21).
Publisher: MDPI
Bibliographic reference: Garrido-Muñoz, I.; Montejo-Ráez, A.; Martínez-Santiago, F.; Ureña-López, L.A. A Survey on Bias in Deep NLP. Appl. Sci. 2021, 11, 3184. https://doi.org/10.3390/app11073184
Collection(s): DI-Artículos

Files in this document:
File: 2021_SurveyBiasDeepNLP.pdf | Description: Published version | Size: 399.7 kB | Format: Adobe PDF


This document is protected by copyright.


This document is licensed under a Creative Commons License.