From fuzzy modeling to explanation: aggregating multi-measures fuzzy systems for XAI
Publication date
2026
Publisher
Elsevier
Abstract
Explainable Artificial Intelligence seeks to make machine learning models transparent by quantifying feature contributions. A common approach interprets models as cooperative games or fuzzy measures, using the Shapley value to estimate feature importance. However, this methodology has limitations: it can ignore feature interactions, it is computationally expensive, and global averaging can mask instance-level variability. To address these limitations, we propose a novel framework based on Multi-Measure Fuzzy Systems, in which each agent represents a distinct perspective of the model, with its own fuzzy measure capturing uncertainty, relevance, and local interactions. We focus on characterizing a representation function that maps the space of fuzzy measures, whose domain has cardinality 2^S, to a tensor space of arbitrary dimension; that is, it reduces a fuzzy measure to a tensor. These representation functions are then applied within an aggregation methodology, grounded in Social Network Analysis, that summarizes these structures as graphs whose nodes and edges encode significance, relevance, and dependencies. Experimental results illustrate the effectiveness of the proposed framework in capturing complex patterns of feature relevance and interaction, providing a comprehensive and interpretable representation of complex models.
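As context for the limitations mentioned above, the following is a minimal sketch (not the paper's method) of the exact Shapley value of a set function v defined on subsets of a feature set S. The enumeration over all coalitions makes the exponential 2^|S| cost explicit; all names here are illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a set function v: frozenset -> float.

    Enumerates every coalition T not containing player i, so the cost
    grows as 2^|S| -- the computational bottleneck noted in the abstract.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            # Shapley weight |T|! (|S| - |T| - 1)! / |S|! for coalitions of size k
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for T in combinations(others, k):
                T = frozenset(T)
                # weighted marginal contribution of i to coalition T
                total += w * (v(T | {i}) - v(T))
        phi[i] = total
    return phi
```

For an additive game such as `v(T) = sum of per-feature weights`, the Shapley value of each feature recovers its own weight, while for non-additive fuzzy measures the marginal terms average away the pairwise interactions that the proposed framework aims to preserve.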