Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-Based Recruitment
Entity: UAM. Departamento de Tecnología Electrónica y de las Comunicaciones
Publisher: Springer Nature
Date: 2023-06-07
Citation: SN Computer Science 4 (2023): 434
ISSN: 2661-8907
DOI: 10.1007/s42979-023-01733-0
Funded by: This work has received funding from different projects, including BBforTAI (PID2021-127641OB-I00 MICINN/FEDER), HumanCAIC (TED2021-131787B-I00), TRESPASS-ETN (MSCA-ITN-2019-860813), and PRIMA (MSCA-ITN-2019-860315). The work of A. Peña is currently supported by an FPU Fellowship (FPU21/00535) from the Spanish MIU and was supported by the Madrid Government (PRICIT (2020/00334/001)) during the elaboration of this work. Also, I. Serna is supported by an FPI Fellowship from the UAM.
Project: Gobierno de España. PID2021-127641OB-I00; Gobierno de España. TED2021-131787B-I00; info:eu-repo/grantAgreement/EC/H2020/860813; info:eu-repo/grantAgreement/EC/H2020/860315
Editor's Version: https://doi.org/10.1007/s42979-023-01733-0
Subjects: Automated recruitment; Bias; Biometrics; Computer vision; Deep learning; FairCV; Fairness; Multimodal; Natural language processing; Telecomunicaciones
Rights: © The Author(s) 2023
Abstract:
The presence of decision-making algorithms in society is rapidly increasing, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. There is a certain consensus about the need to develop AI applications with a Human-Centric approach. Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes. All four of these Human-Centric requirements are closely related to each other. With the aim of studying how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, we propose a fictitious case study focused on automated recruitment: FairCVtest. We train automatic recruitment algorithms using a set of multimodal synthetic profiles including image, text, and structured data, which are consciously scored with gender and racial biases. FairCVtest shows the capacity of the Artificial Intelligence (AI) behind automatic recruitment tools built this way (a common practice in many other application scenarios beyond recruitment) to extract sensitive information from unstructured data and exploit it in combination with data biases in undesirable (unfair) ways. We present an overview of recent works developing techniques capable of removing sensitive information and biases from the decision-making process of deep learning architectures, as well as commonly used databases for fairness research in AI. We demonstrate how learning approaches developed to guarantee privacy in latent spaces can lead to unbiased and fair automatic decision-making processes. Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
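The abstract mentions techniques that remove sensitive information from the latent representations used for decision-making. As a minimal illustration (not the paper's method; the data and variable names here are synthetic and purely for demonstration), one common family of approaches fits a linear probe for the sensitive attribute and then projects the embeddings onto the orthogonal complement of the probe's direction, so that a linear classifier can no longer recover the attribute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent embeddings (e.g., from a CV-scoring network) in which
# one direction leaks a sensitive attribute such as gender.
n, d = 1000, 8
z = rng.normal(size=(n, d))
s = (z[:, 0] > 0).astype(float)  # sensitive attribute encoded along dim 0

# Fit a linear probe for the sensitive attribute and normalize its direction.
w, *_ = np.linalg.lstsq(z, s - s.mean(), rcond=None)
w /= np.linalg.norm(w)

# Debias: project every embedding onto the orthogonal complement of w.
z_fair = z - np.outer(z @ w, w)

leak_before = abs(np.corrcoef(z @ w, s)[0, 1])  # probe recovers the attribute
leak_after = np.linalg.norm(z_fair @ w)         # ~0: direction removed
print(f"probe correlation before: {leak_before:.2f}, "
      f"residual along w after: {leak_after:.1e}")
```

This single-direction projection is only a sketch of the idea; the adversarial and privacy-preserving representation-learning methods surveyed in the paper operate on deep, nonlinear latent spaces.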
Authors: Peña Almansa, Alejandro; Serna Cabello, José Ignacio de la; Morales Moreno, Aythami; Fiérrez Aguilar, Julián; Ortega de la Puente, Alfonso; Herrarte Sánchez, Ainhoa; Alcántara Pla, Manuel; Ortega García, Javier