Robust SVM Classification with lp-Quasi-Norm Feature Selection

Publication date

2025

Abstract

This study presents a robust classification framework with embedded feature selection to tackle challenges in high-dimensional datasets. By using lp-quasi-norms (p ∈ (0,1)), the framework achieves sparse classifiers that are robust to random input perturbations. It extends existing models such as MEMPM and CD-LeMa to their lp-regularized versions, with traditional l2 regularizations serving as benchmarks to evaluate the trade-off between sparsity and predictive performance. To address the computational challenges, a novel Diagonal Two-Step Algorithm is introduced, combining convex approximations and iterative parameter updates for efficient and stable optimization. The proposed methods are validated on benchmark datasets using four classification models and two feature elimination techniques: Direct Feature Elimination and Recursive Feature Elimination. The results demonstrate the influence of the norm parameter p on balanced classification accuracy, feature selection, robustness, and computational efficiency. This comprehensive framework provides practical tools and insights for designing efficient and robust classifiers for high-dimensional applications.
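
As background, the kind of lp-quasi-norm regularized, hinge-loss objective such a framework builds on can be sketched as follows. This is an illustrative textbook formulation under assumed notation (data pairs (x_i, y_i), weight vector w, bias b, regularization weight λ), not the paper's specific MEMPM or CD-LeMa extensions nor its Diagonal Two-Step Algorithm:

\min_{w,\,b} \;\; \frac{1}{n} \sum_{i=1}^{n} \max\!\bigl(0,\; 1 - y_i\,(w^{\top} x_i + b)\bigr) \;+\; \lambda\, \|w\|_p^p,
\qquad \|w\|_p^p = \sum_{j=1}^{d} |w_j|^p, \quad p \in (0, 1).

For p in (0,1) the quasi-norm penalty drives more coefficients exactly to zero than the l2 benchmark, yielding sparser classifiers, but it also makes the problem non-convex, which is what motivates convex approximations and iterative parameter updates of the kind described in the abstract.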
