Explainable Artificial Intelligence for incident management systems
Publication date
2024
Abstract
The field of Artificial Intelligence has undergone a major revolution in recent years. As techniques and their results have improved, many companies have sought to incorporate Artificial Intelligence as one of their fundamental pillars. This Master's thesis therefore applies Artificial Intelligence techniques in a business setting, specifically to obtain relevant information that supports the management of IT incident resolution at the company Bosch. The project has two main objectives. The first is the construction and optimisation of regression models to estimate incident resolution times. The second is the creation of classification models that predict whether an incident will have a normal resolution (less than one day) or an anomalous one (more than one day to correct).
The creation of these models is accompanied by explainable AI techniques, which provide interpretability for black-box algorithms. Thanks to this interpretability, we can understand how the machine learning models work and why they make their decisions, which would not be possible without explainable AI. This knowledge allows us to adjust the models based on the information acquired, to better understand our dataset, and to make strategic decisions informed by what we learn.
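As a rough illustration of the workflow the abstract describes (a classifier for normal vs. anomalous resolutions, followed by a model-agnostic interpretability step), the sketch below trains a gradient-boosting classifier on synthetic incident data and explains it with permutation importance. The feature names, the data-generating process, and the use of scikit-learn are assumptions made for illustration only; the thesis's actual Bosch dataset and models are not reproduced here.

```python
# Illustrative sketch only: synthetic data standing in for real incident
# records, and generic scikit-learn models, not the thesis's actual setup.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical incident features (all invented for this example).
priority = rng.integers(1, 5, n)        # 1 = critical ... 4 = low
reassignments = rng.poisson(1.5, n)     # times the ticket changed hands
queue_load = rng.normal(50, 15, n)      # open tickets in the queue

# Hypothetical ground truth: resolution time grows with priority value,
# reassignments, and queue load; label 1 = anomalous (more than one day).
hours = 4 * priority + 3 * reassignments + 0.1 * queue_load + rng.exponential(5, n)
y = (hours > 24).astype(int)
X = np.column_stack([priority, reassignments, queue_load])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic explanation: permutation importance measures how much
# test accuracy drops when each feature is shuffled, i.e. which features
# the trained black-box classifier actually relies on.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["priority", "reassignments", "queue_load"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In the same spirit, the thesis could rank incident attributes by their contribution to a predicted anomalous resolution and use that ranking to guide both model refinement and operational decisions.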
Description
Master's thesis in Computer Engineering, Facultad de Informática, UCM, Departamento de Ingeniería de Software e Inteligencia Artificial (ISIA), academic year 2023/2024.