Limitaciones del aprendizaje automático en función de la complejidad computacional [Limitations of machine learning as a function of computational complexity]
Full text at PDC
Publication date
2025
Authors
Advisors (or tutors)
Rodríguez Laguna, Ismael
Abstract
This Final Degree Project systematically analyzes the limitations faced by neural networks when dealing with decision problems of varying computational complexity. Using a carefully selected set of Boolean functions, the study investigates how learning capacity varies depending on both the theoretical complexity class of the problem and a structural metric: average sensitivity. To this end, several distinct neural architectures (of varying depth) are trained on problems encoded with 21-bit inputs. Their performance is evaluated and correlated with the inherent instability of each function. The results confirm that both the computational complexity and the internal structure of a function significantly influence the generalization capability of the models. In particular, functions with high sensitivity prove more difficult to learn, even when their theoretical complexity is low. Conversely, some NP-complete problems with a stable structure can be effectively approximated by simple models. The work combines tools from the theory of computation, experimental analysis, and machine learning, offering an integrative perspective that helps predict more accurately what makes a problem hard for a neural network to learn.
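The structural metric central to the abstract, average sensitivity, has a standard definition: the expected number of single-bit flips that change a Boolean function's output, averaged over all inputs. As a brief illustration only (not code from the project itself, which uses 21-bit inputs), it can be computed by brute force for small input sizes:

```python
from itertools import product

def average_sensitivity(f, n):
    """Average, over all 2^n inputs x, of the number of positions i
    such that flipping bit i of x changes the value of f."""
    total = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            flipped = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(flipped) != fx:
                total += 1
    return total / 2 ** n

# Parity is maximally sensitive: every single-bit flip changes the output.
parity = lambda x: sum(x) % 2
# Majority is comparatively stable: only inputs near the threshold are sensitive.
majority = lambda x: int(sum(x) * 2 > len(x))

print(average_sensitivity(parity, 3))    # 3.0
print(average_sensitivity(majority, 3))  # 1.5
```

The contrast between parity (sensitivity equal to n) and majority (much lower) mirrors the thesis's finding that high-sensitivity functions are harder to learn even when computationally cheap.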
Description
Final project for the Double Degree in Computer Engineering and Mathematics (Trabajo de Fin de Doble Grado en Ingeniería Informática y Matemáticas), Facultad de Informática UCM, Departamento de Sistemas Informáticos y Computación, academic year 2024/2025.