Agile gesture recognition for capacitive sensing devices: adapting on-the-job

dc.contributor.author: Liu, Ying
dc.contributor.author: Guo, Liucheng
dc.contributor.author: Makarov Slizneva, Valeriy
dc.contributor.author: Huang, Yuxiang
dc.contributor.author: Gorban, Alexander N.
dc.contributor.author: Mirkes, Evgeny
dc.contributor.author: Tyukin, Ivan Y.
dc.date.accessioned: 2023-06-22T11:23:08Z
dc.date.available: 2023-06-22T11:23:08Z
dc.date.issued: 2023-05-12
dc.description.abstract: Automated hand gesture recognition has been a focus of the AI community for decades. Traditionally, work in this domain has revolved largely around scenarios that assume the availability of a flow of images of the user's hands. This has partly been due to the prevalence of camera-based devices and the wide availability of image data. However, there is growing demand for gesture recognition technology that can be implemented on low-power devices using limited sensor data instead of high-dimensional inputs such as hand images. In this work, we demonstrate a hand gesture recognition system and method that uses signals from capacitive sensors embedded into the etee hand controller. The controller generates real-time signals from each of the wearer's five fingers. We use machine learning techniques to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms. The analysis is based on a two-stage training strategy comprising dimension reduction through principal component analysis and classification with k-nearest neighbours. Remarkably, we found that this combination showed a level of performance comparable to more advanced methods such as a supervised variational autoencoder. The base system can also be equipped with the capability to learn from occasional errors by providing it with an additional adaptive error correction mechanism. The results showed that the error corrector improves the classification performance of the base system without compromising its overall performance. The system requires no more than 1 ms of computing time per input sample and is smaller than deep neural networks, demonstrating the feasibility of agile gesture recognition systems based on this technology.
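A minimal sketch (not the authors' implementation) of the two-stage pipeline described in the abstract, written with scikit-learn: PCA reduces each windowed capacitive reading to three features, and a k-nearest-neighbour classifier labels the gesture. The data shapes, window length, number of gesture classes, and hyperparameter values below are illustrative assumptions, not values from the paper.

```python
# Sketch of the two-stage pipeline: PCA for dimension reduction,
# then k-nearest-neighbour classification. All sizes are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: each row stands for one 500 ms window of capacitive
# readings from the five finger sensors, flattened into a single vector.
n_windows, window_len, n_fingers = 1000, 50, 5
X = rng.normal(size=(n_windows, window_len * n_fingers))
y = rng.integers(0, 3, size=n_windows)  # three gesture classes (illustrative)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: project each window onto three principal components,
# mirroring the "three features" mentioned in the abstract.
# Stage 2: classify the low-dimensional representation with kNN.
model = make_pipeline(PCA(n_components=3),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

The pipeline object keeps both stages together, so the same PCA projection fitted on training windows is reused at inference time; with only three components and a small neighbour search, per-sample prediction cost stays low, which is consistent with the low-latency goal stated in the abstract.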
dc.description.department: Depto. de Análisis Matemático y Matemática Aplicada
dc.description.faculty: Fac. de Ciencias Matemáticas
dc.description.refereed: FALSE
dc.description.status: unpub
dc.eprint.id: https://eprints.ucm.es/id/eprint/78620
dc.identifier.uri: https://hdl.handle.net/20.500.14352/72370
dc.language.iso: eng
dc.rights: Atribución-CompartirIgual 3.0 España
dc.rights.accessRights: open access
dc.rights.uri: https://creativecommons.org/licenses/by-sa/3.0/es/
dc.subject.cdu: 004.8
dc.subject.keyword: Gesture recognition
dc.subject.keyword: Error corrector
dc.subject.keyword: Adaptive error correction mechanism
dc.subject.keyword: Kernel trick
dc.subject.keyword: Etee
dc.subject.ucm: Inteligencia artificial (Informática)
dc.subject.ucm: Cibernética matemática
dc.subject.ucm: Investigación operativa (Matemáticas)
dc.subject.unesco: 1203.04 Inteligencia Artificial
dc.subject.unesco: 1207.03 Cibernética
dc.subject.unesco: 1207 Investigación Operativa
dc.title: Agile gesture recognition for capacitive sensing devices: adapting on-the-job
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: a5728eb3-1e14-4d59-9d6f-d7aa78f88594
relation.isAuthorOfPublication.latestForDiscovery: a5728eb3-1e14-4d59-9d6f-d7aa78f88594

Original bundle
Name: makarov_agile_bysa.pdf
Size: 1.11 MB
Format: Adobe Portable Document Format