RT Conference Proceedings
T1 Interpretability challenges in machine learning models
A1 Marín Díaz, Gabriel
A1 Carrasco González, Ramón Alberto
A1 Gómez González, Daniel
A2 Pelegrín Borondo, Jorge
A2 Arias Oliva, Mario
A2 Murata, Kiyoshi
A2 Lara Palma, Ana María
AB Decisions based on Machine Learning (ML) algorithms are having an increasingly significant social impact; however, most of these systems are based on black box algorithms, models whose rules are not understandable to humans. In response, various public and private organisations, as well as the scientific community, have recognised the problem of interpretability, focusing on the development of interpretable (white box) models or on methods that explain black box models. The aim of this article is to review the historical evolution and current state of Machine Learning algorithms, analysing the need for interpretability. The challenges of interpretability are addressed from different points of view: research, law, industry and regulatory bodies.
SN 978-84-09-28672-0
YR 2021
FD 2021-07-01
LK https://hdl.handle.net/20.500.14352/130453
UL https://hdl.handle.net/20.500.14352/130453
LA eng
NO Marín Díaz, G., Carrasco González, R. A., & Gómez González, D. (2021). Interpretability challenges in machine learning models. In J. Pelegrín-Borondo, M. Arias-Oliva, K. Murata, & A. M. Lara Palma (Eds.), Moving technology ethics at the forefront of society, organisations and governments (pp. 205–217). Universidad de La Rioja. https://dialnet.unirioja.es/servlet/articulo?codigo=8036858
DS Docta Complutense
RD 26 Feb 2026