Sign language dactyl recognition based on machine learning algorithms
DOI: https://doi.org/10.15587/1729-4061.2021.239253
Keywords: gesture recognition, sign language, feature extraction, hand tracking, algorithm evaluation
Abstract
In the course of this research, the American, Russian and Turkish sign languages were analyzed, and a program for recognizing the Kazakh dactyl sign language was implemented using machine learning methods. A dataset of 5000 images was collected for each gesture, and gesture recognition algorithms such as Random Forest, Support Vector Machine and Extreme Gradient Boosting were applied. Two data types were combined into a single database, which required a change in the architecture of the system as a whole. The quality of the algorithms was also evaluated.
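As an illustration of this classification stage, the sketch below trains the three algorithms named in the abstract with scikit-learn and xgboost. The feature representation (21 hand landmarks with three coordinates each), the number of classes and the placeholder data are assumptions made for demonstration only and are not taken from the paper.

```python
# Minimal sketch: training the three classifiers mentioned in the abstract on a
# hypothetical feature matrix X (one row per gesture sample) and label vector y.
# The feature extraction step and the data below are assumed, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder data: in practice, 5000 samples per gesture would be stacked here.
X = np.random.rand(1000, 63)        # e.g. 21 hand landmarks x 3 coordinates (assumed)
y = np.arange(1000) % 42            # class indices for the dactyl alphabet (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Support Vector Machine": SVC(kernel="rbf", random_state=42),
    "Extreme Gradient Boosting": XGBClassifier(eval_metric="mlogloss", random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```

The same train/test split is reused for all three models so their accuracies can be compared directly, mirroring the comparison reported in the abstract.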
The research was motivated by the fact that existing work on systems for recognizing the Kazakh dactyl sign language is currently insufficient for a complete representation of the language. The Kazakh alphabet contains specific letters, and these orthographic peculiarities create problems when developing recognition systems for the Kazakh sign language.
The results showed that the Support Vector Machine and Extreme Gradient Boosting algorithms are superior in real-time performance, while the Random Forest algorithm achieves the highest recognition accuracy. The classification accuracy was 98.86 % for Random Forest, 98.68 % for Support Vector Machine and 98.54 % for Extreme Gradient Boosting. The quality evaluation of these classical algorithms also yielded high scores.
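A hedged sketch of the kind of quality evaluation referred to above, continuing from the previous example: per-class results are aggregated into macro precision, recall and F1 alongside overall accuracy. The choice of metrics here is an assumption; the paper's exact evaluation protocol is not reproduced.

```python
# Sketch of evaluating the trained classifiers on the held-out test split.
# Reuses the hypothetical `models`, `X_test` and `y_test` from the previous sketch.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

for name, model in models.items():
    y_pred = model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="macro", zero_division=0
    )
    print(f"{name}: accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```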
The practical significance of this work lies in the fact that gesture recognition research based on the updated Kazakh alphabet has not yet been conducted. The results can therefore be used by other researchers for further work on the recognition of the Kazakh dactyl sign language, as well as by researchers engaged in the development of international sign language.
License
Copyright (c) 2021 Chingiz Kenshimov, Zholdas Buribayev, Yedilkhan Amirgaliyev, Aisulyu Ataniyazova, Askhat Aitimov
This work is licensed under a Creative Commons Attribution 4.0 International License.