Combining the user friendliness of SignWriting with the precision of linguistic parameters
Publication date
2025
Citation
Antonio F. G. Sevilla, José María Lahoz-Bengoechea, Sandra Conde González, Alberto Diaz Esteban, Pablo Folgueira Galán, and Julia de la Calle Pérez. 2025. Combining the user friendliness of SignWriting with the precision of linguistic parameters. In ACM International Conference on Intelligent Virtual Agents (IVA Adjunct ’25), September 16–19, 2025, Berlin, Germany. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3742886.3756740
Abstract
This paper presents TraduSE, a Progressive Web Application designed to bridge the gap between the user-friendliness of SignWriting and the precision required for sign language processing tasks. Capturing sign language complexity is crucial, but parameter-based approaches often have a steep learning curve and require specialized tools. SignWriting offers a more accessible alternative for representing signs, and TraduSE addresses the challenge of making linguistically well-informed systems accessible to signers, learners, and researchers by using SignWriting as an input for a native sign language dictionary: the Spanish Sign Language Signary.

It does so by allowing users to upload or draw SignWriting images, which are recognized by an existing artificial vision pipeline. The SignWriting elements found are then converted into a parametric description, recovering along the way the 3D signing space information from the 2D SignWriting. Finally, the resulting parameters are used to search the Spanish Sign Language Signary. Fuzzy matching allows for error resilience and finding similar tokens, enabling users to locate signs even with incomplete or imprecise information. Search results are presented as videos and Spanish glosses, while also displaying and explaining the linguistic parameters, serving as a didactic tool.

Our approach not only provides a crucial bridge between visual sign representation and precise linguistic analysis, widening the reach of sign language research, but also demonstrates flexibility in handling the inherent fuzziness and potential errors in everyday language use and user input.
Description
This publication is part of the R&D&I project HumanAI-UI, Grant PID2023-148577OB-C22 (Human-Centered AI: User-Driven Adaptative Interfaces-HumanAI-UI) funded by MICIU/AEI/10.13039/501100011033 and by FEDER/UE.