Explainable Artificial Intelligence for hip fracture recognition (Inteligencia Artificial Explicable para el reconocimiento de fracturas de cadera)
Publication date
2024
Abstract
With the rapid global growth of Artificial Intelligence (AI), the need for AI models to be reliable and transparent has become evident. This is essential for their widespread adoption in fields such as healthcare, finance, and legislation, which involve critical decisions that can affect people's lives. Professionals need assurance that automated decisions are accurate, fair, and free from bias. Moreover, the interpretability of AI models is fundamental for identifying and mitigating errors, especially in models based on deep neural networks, which often function as “black boxes” whose internal decision-making processes are opaque and difficult to interpret. This need has led to the development of eXplainable Artificial Intelligence (XAI), which focuses not only on the accuracy of models but also on their interpretability and on users' ability to understand and trust the decisions these systems make.
This work focuses on applying AI to the automatic classification of hip fractures in X-ray images, a relevant challenge since certain types of fractures can go unnoticed in an initial clinical evaluation. We present a model based on Convolutional Neural Networks (CNNs) for the classification of hip fractures, and we explore various explanation methods within the XAI framework, which have the potential to be highly useful in a clinical setting, both for specialist physicians and for patients themselves.
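The abstract does not specify the network architecture, the fracture classes, or the explanation technique used. As a purely illustrative sketch of the kind of pipeline described, the following Python code builds a CNN classifier from a torchvision ResNet-18 backbone and produces a Grad-CAM-style saliency map for one radiograph; the choice of ResNet-18, the three assumed classes, and Grad-CAM itself are hypothetical and are not taken from the thesis.

# Hypothetical sketch only: the architecture, class labels and the use of
# Grad-CAM are assumptions for illustration, not the thesis's actual method.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

NUM_CLASSES = 3  # assumed labels, e.g. no fracture / intracapsular / extracapsular

# CNN backbone with a classification head sized for the fracture classes.
model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

# Capture the last convolutional block's feature maps and their gradients;
# Grad-CAM weights each spatial location by the class-specific gradient.
activations, gradients = {}, {}

def _capture(module, inputs, output):
    activations["value"] = output
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4.register_forward_hook(_capture)

def gradcam(x):
    """Return the predicted class index and a [0, 1] heatmap for one image."""
    logits = model(x)
    pred = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, pred].backward()                      # gradients w.r.t. layer4
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return pred, cam[0, 0]

# Usage with a dummy tensor; a real X-ray would be preprocessed to this shape.
pred_class, heatmap = gradcam(torch.randn(1, 3, 224, 224))
print(pred_class, heatmap.shape)  # e.g. 2 torch.Size([224, 224])

A heatmap of this kind can be overlaid on the original radiograph, letting a specialist check whether the regions driving the model's prediction coincide with the suspected fracture site, which is the sort of interpretability the XAI methods discussed here aim to provide.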
Description
Bachelor's thesis (Trabajo de Fin de Doble Grado) for the Double Degree in Computer Science Engineering and Mathematics, Facultad de Informática UCM, Departamento de Ingeniería del Software e Inteligencia Artificial, academic year 2023/2024