Interpretability challenges in machine learning models
Publication date
2021
Citation
Marín Díaz, G., Carrasco González, R. A., & Gómez González, D. (2021). Interpretability challenges in machine learning models. In J. Pelegrín-Borondo, M. Arias-Oliva, K. Murata, & A. M. Lara Palma (Eds.), Moving technology ethics at the forefront of society, organisations and governments (pp. 205–217). Universidad de La Rioja. https://dialnet.unirioja.es/servlet/articulo?codigo=8036858
Abstract
Decisions based on Machine Learning (ML) algorithms are having an increasingly significant social impact; however, most of these systems rely on black box algorithms, models whose rules are not understandable to humans. At the same time, public and private organisations, as well as the scientific community, have recognised the problem of interpretability, focusing on the development of interpretable (white box) models or on methods that explain black box models.
The aim of this article is to present a review of the historical evolution and current state of Machine Learning algorithms, analysing the need for interpretability. To this end, the challenges of interpretability are addressed from several perspectives: research, law, industry, and regulatory bodies.