Authors: Carrasco, Miguel; López, Julio; Ivorra, Benjamín Pierre Paul; Marechal, Matthieu; Ramos Del Olmo, Ángel Manuel

Date accessioned: 2025-02-14
Date available: 2025-02-14
Date issued: 2025

Handle: https://hdl.handle.net/20.500.14352/118104

Abstract: This study presents a robust classification framework with embedded feature selection to tackle challenges in high-dimensional datasets. By utilizing lp-quasi-norms with p ∈ (0,1), the framework achieves sparse classifiers that are robust to random input perturbations. It extends existing models, such as MEMPM and CD-LeMa, to their lp-regularized versions, with traditional l2-regularizations serving as benchmarks to evaluate the trade-offs between sparsity and predictive performance. To address the resulting computational challenges, a novel Diagonal Two-Step Algorithm is introduced, combining convex approximations and iterative parameter updates for efficient and stable optimization. The proposed methods are validated on benchmark datasets using four classification models and two feature elimination techniques: Direct Feature Elimination and Recursive Feature Elimination. Results demonstrate the influence of the norm parameter p on balanced classification accuracy, feature selection, robustness, and computational efficiency. This comprehensive framework provides practical tools and insights for designing efficient and robust classifiers for high-dimensional applications.

Language: English

Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
License URL: http://creativecommons.org/licenses/by-nc-nd/4.0/

Title: Robust SVM Classification with lp-Quasi-Norm Feature Selection

Type: journal article
Access: open access

Keywords: Support Vector Machines; LP-quasi-norm; Direct Feature Elimination; Operations Research (Mathematics); Artificial Intelligence (Computer Science)

UNESCO subject codes: 1207 Operations Research; 1203.04 Artificial Intelligence