%0 Book Section
%T The pivotal role of interpretability in employee attrition prediction and decision-making
%I Universidad de La Rioja
%D 2024
%@ 978-84-09-58161-0
%U https://hdl.handle.net/20.500.14352/130072
%X This article explores the evolution of machine learning (ML) algorithms, emphasizing the growing importance of interpretability in understanding automated decisions. Progress from early to advanced ML models highlights the need for better performance and adaptability. However, the inherent black-box nature of many ML algorithms raises challenges, underscoring the necessity of interpretability for improving transparency and accountability. Examining the evolution of interpretability in ML, the article showcases advances in techniques that facilitate human comprehension of decision-making processes. As ML becomes integral across domains, the article underscores the importance of interpretable models in bridging the gap between automated decisions and human understanding. The article also delves into the changing role of humans in decision-making: despite the efficiency of ML algorithms, the interpretability factor prompts a re-evaluation of human involvement, necessitating a balanced approach to ethical AI deployment. Furthermore, the article explores integrating decision-making methods such as the Analytic Hierarchy Process (AHP) to enhance interpretability. Proposing a framework that combines AHP with interpretable ML models, it suggests a structured approach to human-in-the-loop decision-making that accounts for feature importance.
%~