Authors: Marín Díaz, Gabriel; Carrasco González, Ramón Alberto; Gómez González, Daniel
Editors: Pelegrín Borondo, Jorge; Arias Oliva, Mario; Murata, Kiyoshi; Lara Palma, Ana María
Date accessioned: 2026-01-16
Date available: 2026-01-16
Date issued: 2021-07-01
Citation: Marín Díaz, G., Carrasco González, R. A., & Gómez González, D. (2021). Interpretability challenges in machine learning models. In J. Pelegrín-Borondo, M. Arias-Oliva, K. Murata, & A. M. Lara Palma (Eds.), Moving technology ethics at the forefront of society, organisations and governments (pp. 205–217). Universidad de La Rioja. https://dialnet.unirioja.es/servlet/articulo?codigo=8036858
ISBN: 978-84-09-28672-0
Handle: https://hdl.handle.net/20.500.14352/130453
Abstract: Decisions based on machine learning (ML) algorithms have an increasingly significant social impact; however, most of these systems rely on black-box algorithms, that is, models whose rules are not understandable to humans. At the same time, public and private organisations, as well as the scientific community, have recognised the problem of interpretability, focusing on the development of interpretable (white-box) models or on methods that explain black-box models. The aim of this article is to review the historical evolution and current state of machine learning algorithms, analysing the need for interpretability. The challenges of interpretability are addressed from several perspectives: research, law, industry and regulatory bodies.
Language: English
Title: Interpretability challenges in machine learning models
Type: conference paper
URL (article): https://dialnet.unirioja.es/servlet/articulo?codigo=8036858
URL (book): https://dialnet.unirioja.es/servlet/libro?codigo=829454
Access rights: open access
UDC: 004.85; 519.8; 510.6; 17
Keywords: Machine Learning; Interpretability; Deep Learning; Bias; Artificial Intelligence
Subjects: Artificial intelligence (Computer science); Artificial intelligence (Philosophy); Ethics; Operations research (Statistics)
UNESCO codes: 1203.04 Artificial Intelligence; 1102.08 Mathematical Logic; 5311.07 Operations Research