Layer factor analysis in convolutional neural networks for explainability

dc.contributor.authorLópez González, Clara Isabel
dc.contributor.authorGómez Silva, María José
dc.contributor.authorBesada Portas, Eva
dc.contributor.authorPajares Martínsanz, Gonzalo
dc.date.accessioned2024-02-02T15:38:06Z
dc.date.available2024-02-02T15:38:06Z
dc.date.issued2024-01
dc.descriptionThis is a transformative agreement.
dc.description.abstractExplanatory methods that focus on the analysis of the features encoded by Convolutional Neural Networks (CNNs) are of great interest, since they help to understand the underlying process hidden behind the black-box nature of these models. However, to explain the knowledge gathered in a given layer, they must decide which of the numerous filters to study, further assuming that each of them corresponds to a single feature. This, coupled with the redundancy of information, makes it difficult to ensure that the relevant characteristics are being analyzed. The above represents an important challenge and defines the aim and scope of our proposal. In this paper we present a novel method, named Explainable Layer Factor Analysis for CNNs (ELFA-CNNs), which models and accurately describes convolutional layers by relying on factor analysis. Regarding contributions, ELFA obtains the essential underlying features, together with their correlation with the original filters, providing an accurate and well-founded summary. Through the factorial parameters we gain insights about the information learned, the connections between channels, and the redundancy of the layer, among others. To provide visual explanations in a similar way to other methods, two additional proposals are made: a) Essential Feature Attribution Maps (EFAM) and b) intrinsic features inversion. The results prove the effectiveness of the developed general methods. They are evaluated on different CNNs (VGG-16, ResNet-50, and DeepLabv3+) on generic datasets (CIFAR-10, Imagenette, and CamVid). We demonstrate that convolutional layers adequately fit a factorial model thanks to the new metrics presented for factor and fitting residuals (D1, D>, and Res, derived from covariance matrices). Moreover, knowledge about the deep image representations and the learning process is acquired, as well as reliable heat maps highlighting regions where essential features are located.
This study effectively provides an explainable approach that can be applied to different CNNs and over different datasets.
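To give a concrete sense of the abstract's core idea, the following sketch fits a factor model to a convolutional layer's channel activations, treating each channel as an observed variable and recovering a small set of latent "essential features" together with their loadings on the original filters. This is an illustrative sketch using scikit-learn's generic FactorAnalysis on random stand-in activations, not the authors' ELFA-CNNs implementation; the tensor shapes and the choice of 5 factors are assumptions for demonstration only.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Stand-in for one layer's activations: N images x H x W x C channels
# (random data here; in practice these come from a forward pass of the CNN).
acts = rng.normal(size=(200, 8, 8, 32))

# Each channel is an observed variable; every spatial position of every
# image is one observation of those 32 variables.
X = acts.reshape(-1, acts.shape[-1])  # shape (200 * 8 * 8, 32)

# Fit a factor model with a few latent factors (5 is an arbitrary choice).
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(X)

# Loadings correlate each original filter/channel with the latent factors.
loadings = fa.components_.T  # shape (32, 5)

# Communality per channel: how much of its variance the shared factors
# explain; low values suggest a channel carries little shared information.
communality = (loadings ** 2).sum(axis=1)  # shape (32,)
print(loadings.shape, communality.shape)
```

Under this framing, highly correlated channels load on the same factor, which is one way redundancy within a layer becomes visible.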
dc.description.departmentDepto. de Ingeniería de Software e Inteligencia Artificial (ISIA)
dc.description.departmentSección Deptal. de Arquitectura de Computadores y Automática (Físicas)
dc.description.facultyFac. de Informática
dc.description.facultyFac. de Ciencias Físicas
dc.description.refereedTRUE
dc.description.sponsorshipComunidad Autónoma de Madrid
dc.description.sponsorshipMinisterio de Ciencia e Innovación (España)
dc.description.sponsorshipUnión Europea NextGeneration
dc.description.sponsorshipMinisterio de Universidades (España)
dc.description.statuspub
dc.identifier.citationLópez-González CI, Gómez-Silva MJ, Besada-Portas E, Pajares G. Layer factor analysis in convolutional neural networks for explainability. Applied Soft Computing. 2024 Jan;150:111094-111111.
dc.identifier.doi10.1016/j.asoc.2023.111094
dc.identifier.issn1568-4946
dc.identifier.officialurlhttps://www.sciencedirect.com/science/article/pii/S1568494623011122
dc.identifier.urihttps://hdl.handle.net/20.500.14352/98403
dc.journal.titleApplied Soft Computing
dc.language.isoeng
dc.page.final111111
dc.page.initial111094
dc.publisherElsevier
dc.relation.projectIDinfo:eu-repo/grantAgreement/CAM/PRICIT/Y2020/TCS-6420//Hacia un sistema Integral para la Alerta y Gestión de BLOOMs de cianobacterias en aguas continentales/IA-GES-BLOOM-CM
dc.relation.projectIDinfo:eu-repo/grantAgreement/MCIN/AEI/10.13039/501100011033/EU/PRTR/TED2021-130123B-I00//Más allá del uso de tecnologías digitales en blooms de cianobacterias: gestión inteligente de cianobacterias mediante el uso de gemelos digitales y computación en el borde/SMART-BLOOMS
dc.relation.projectIDinfo:eu-repo/grantAgreement/MCIN/PEICTI/PID2021-127648OB-C33/ES/Cooperación de vehículos de superficie y aéreos para aplicaciones de inspección en entornos cambiantes/INSERTION
dc.rightsAttribution 4.0 International
dc.rights.accessRightsopen access
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subject.cdu004.8
dc.subject.cdu004.85
dc.subject.cdu004.932
dc.subject.cdu004.032.26
dc.subject.keywordDeep learning
dc.subject.keywordExplainable Artificial Intelligence (xAI)
dc.subject.keywordStatistical modeling
dc.subject.keywordVisual explanation
dc.subject.keywordFeature learning
dc.subject.keywordAttribution map
dc.subject.ucmInteligencia artificial (Informática)
dc.subject.unesco1203.17 Informática
dc.titleLayer factor analysis in convolutional neural networks for explainability
dc.typejournal article
dc.type.hasVersionVoR
dc.volume.number150
dspace.entity.typePublication
relation.isAuthorOfPublication779a7137-78a8-46a7-81e0-58b8bd5f1748
relation.isAuthorOfPublication0acc96fe-6132-45c5-ad71-299c9dcb6682
relation.isAuthorOfPublication878e090e-a59f-4f17-b5a2-7746bed14484
relation.isAuthorOfPublication.latestForDiscovery779a7137-78a8-46a7-81e0-58b8bd5f1748
Original bundle
Name: Layer factor analysis in cnn for explainability.pdf
Size: 8.92 MB
Format: Adobe Portable Document Format
Collections