RT Journal Article
T1 Analyzing and interpreting convolutional neural networks using latent space topology
A1 López González, Clara Isabel
A1 Gómez Silva, María José
A1 Besada Portas, Eva
A1 Pajares Martínsanz, Gonzalo
AB The development of explainability methods for Convolutional Neural Networks (CNNs), within the growing framework of explainable Artificial Intelligence (xAI) for image understanding, is crucial given the success of neural networks in contrast with their black-box nature. However, usual methods focus on image visualizations and are inadequate for analyzing the encoded contextual information (which captures the spatial dependencies of pixels considering their neighbors), as well as for explaining the evolution of learning across layers without degrading the information. To address the latter, this paper presents a novel explanatory method based on the study of the latent representations of CNNs through their topology, supported by Topological Data Analysis (TDA). For each activation layer after a convolution, the pixel values of the activation maps along the channels are considered latent space points. The persistent homology of these data is summarized via persistence landscapes, called Latent Landscapes. This provides a global view of how contextual information is encoded, its variety and evolution, and allows for statistical analysis. The applicability and effectiveness of our approach are demonstrated by experiments conducted with CNNs trained on distinct datasets: (1) two U-Net segmentation models on RGB and pseudo-multiband images (generated by considering vegetation indices) from the agricultural benchmark CRBD were evaluated, in order to explain the difference in performance; and (2) a VGG-16 classification network on CIFAR-10 (RGB) was analyzed, showing how the information evolves within the network. Moreover, comparisons with state-of-the-art methods (Grad-CAM and occlusion) prove the consistency and validity of our proposal. It offers novel insights into the decision-making process and helps to compare how models learn.
PB Elsevier
SN 0925-2312
YR 2024
FD 2024-08
LK https://hdl.handle.net/20.500.14352/104388
UL https://hdl.handle.net/20.500.14352/104388
LA eng
NO López-González CI, Gómez-Silva MJ, Besada-Portas E, Pajares G. Analyzing and interpreting convolutional neural networks using latent space topology. Neurocomputing. 2024 May;593: 127806-19
NO Comunidad Autónoma de Madrid
NO Ministerio de Ciencia e Innovación (España)
NO European Commission
NO Ministerio de Universidades (España)
DS Docta Complutense
RD 17 Apr 2025