Function approximation using neural networks
Official URL: Full text at PDC
Publication date: 2025
Abstract
Function approximation is a fundamental problem in mathematics and engineering, and artificial neural networks have emerged as a powerful tool for it. The theoretical capability of these architectures rests on their status as universal approximators: fundamental results in the field establish that, under certain conditions, such networks can approximate a wide variety of functions. However, most classical proofs of these theorems are non-constructive; that is, they guarantee the existence of a suitable network without providing an explicit method to construct it. This project offers a comprehensive study of this capability, taking the problem from theory to practice. It first examines the mathematical foundations of the approximation theorems of Cybenko and Hornik. It then analyses in detail several constructive results that give a step-by-step procedure for defining the architecture and parameters of a neural network that achieves a prescribed error bound. Finally, these constructive methods are implemented and validated through a series of computational experiments in Python, focused on the convergence of the approximations and the relative performance of activation functions such as the Heaviside and logistic functions. The project culminates in the development of an interactive web application that lets users explore and visualize these concepts dynamically.
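
For reference, the density theorem at the heart of the project (Cybenko, 1989) admits the following standard formulation; the exact hypotheses vary slightly between sources. Let $\sigma: \mathbb{R} \to \mathbb{R}$ be a continuous sigmoidal function, meaning $\sigma(t) \to 1$ as $t \to +\infty$ and $\sigma(t) \to 0$ as $t \to -\infty$. Then the finite sums

$$ G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left( w_j^{\top} x + \theta_j \right), \qquad w_j \in \mathbb{R}^d,\ \alpha_j, \theta_j \in \mathbb{R}, $$

are dense in $C([0,1]^d)$ with the supremum norm: for every continuous $f$ on $[0,1]^d$ and every $\varepsilon > 0$ there exist $N$ and parameters $\{\alpha_j, w_j, \theta_j\}$ such that $\sup_{x \in [0,1]^d} |G(x) - f(x)| < \varepsilon$. Hornik's results extend this density to broader classes of activation functions and norms.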
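
To make the constructive side concrete, the sketch below implements one classical grid-based construction in Python; it is an illustrative assumption about the kind of procedure the project implements, not necessarily the exact construction analysed in the thesis. With Heaviside neurons placed at the knots of a uniform grid, the network reproduces the piecewise-constant interpolant of f, so the uniform error is bounded by the modulus of continuity omega(f, 1/n); substituting a steep logistic activation gives the smooth comparison mentioned in the abstract.

import numpy as np

def heaviside(z):
    """Heaviside step activation: 1 where z >= 0, else 0."""
    return (z >= 0).astype(float)

def logistic(z, steepness=100.0):
    """Steep logistic activation, a smooth surrogate for the step."""
    return 1.0 / (1.0 + np.exp(-steepness * z))

def constructive_net(f, n, activation=heaviside):
    """One-hidden-layer network with n neurons approximating f on [0, 1].

    Telescoping construction
        f_n(x) = f(t_0) + sum_j (f(t_j) - f(t_{j-1})) * activation(x - t_j),
    which with Heaviside neurons equals f(t_j) on each cell [t_j, t_{j+1}),
    so sup |f_n - f| <= omega(f, 1/n), the modulus of continuity of f.
    """
    t = np.linspace(0.0, 1.0, n + 1)   # uniform grid t_0 < ... < t_n
    alpha = np.diff(f(t))              # output weights: jumps of f on the grid
    bias = float(f(t[0]))              # output bias: value at the left endpoint
    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        hidden = activation(x[:, None] - t[1:][None, :])   # hidden layer, shape (m, n)
        return bias + hidden @ alpha
    return net

if __name__ == "__main__":
    f = lambda x: np.sin(2.0 * np.pi * x)   # hypothetical test function for illustration
    xs = np.linspace(0.0, 1.0, 2001)
    for n in (10, 100, 1000):
        for act in (heaviside, logistic):
            err = np.max(np.abs(constructive_net(f, n, act)(xs) - f(xs)))
            print(f"n={n:5d}  {act.__name__:9s}  sup error = {err:.4f}")

For this choice of steepness, the Heaviside error for a Lipschitz target decays like 1/n, while the logistic surrogate stops improving once the grid spacing falls below the width of its transition region, which is one simple way to observe the relative performance of the two activations.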