
Luubaatar Badarch, Munkh-Erdene Ganbat, Otgonbayar Altankhuyag, Amartuvshin Togooch

Abstract

Sign language is a gesture-based manual language used by people with hearing impairments and spoken language disorders to communicate with others. There is no universal sign language; the most widely used is American Sign Language. Mongolian Sign Language (MSL) has hand signs for the letters of the alphabet, numbers, and other commonly used words. There are an estimated 16,000 MSL signers. The lack of means to translate MSL into Mongolian, such as professional interpreters or translator applications, hinders MSL signers' freedom of expression and their political and public participation. Here, we created an MSL recognition system model that uses a camera to capture the letter symbols of the MSL alphabet and translates them into written Mongolian words. The proposed model uses two machine learning models: one that recognizes, sorts, and filters the input, and one that processes the Mongolian language. The model achieved an F1 score of 0.8678 across 51 distinct hand gestures. The natural language processing model that forms words performed adequately, though it can be improved in future work.
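
The pipeline the abstract describes (camera capture, gesture recognition, then word formation) can be made concrete with a short sketch. The Python code below is a minimal, hypothetical illustration only: it assumes MediaPipe Hands for landmark extraction and a generic scikit-learn-style classifier over the 51 gesture classes. The function names, the single-hand assumption, and the naive word-forming heuristic are our assumptions, not the authors' published implementation, which uses a dedicated NLP model for the second stage.

```python
# Hypothetical two-stage sketch of the MSL recognition pipeline described in
# the abstract: (1) classify per-frame hand landmarks into one of 51 gesture
# classes, (2) assemble the recognized letters into a word. Illustrative only.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands


def extract_landmarks(frame_bgr, hands):
    """Return a flat (63,) array of x, y, z coordinates for the 21 hand
    landmarks detected by MediaPipe Hands, or None if no hand is found."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()


def recognize_letters(classifier, camera_index=0, max_frames=300):
    """Stage 1: predict a gesture class (letter) for each camera frame.
    `classifier` is assumed to be a trained scikit-learn-style model."""
    letters = []
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=1) as hands:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            features = extract_landmarks(frame, hands)
            if features is not None:
                letters.append(classifier.predict(features[None, :])[0])
    cap.release()
    return letters


def form_word(letters):
    """Stage 2 (placeholder): collapse consecutive repeated predictions into
    a word. The paper's NLP word-forming model is not reproduced here."""
    word = letters[:1]
    for prev, cur in zip(letters, letters[1:]):
        if cur != prev:
            word.append(cur)
    return "".join(word)
```

For reference, the reported F1 score of 0.8678 is the harmonic mean of precision and recall, F1 = 2PR / (P + R); how it is averaged over the 51 gesture classes is not stated in the abstract.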



Keywords

Machine learning, Sign language translator, Image processing, Natural language processing, Mongolian sign language

How to Cite
Badarch, L., Ganbat, M.-E., Altankhuyag, O., & Togooch, A. (2022). Mongolian Sign Language Recognition Model. ICT Focus, 1(1), 1–9. https://doi.org/10.58873/sict.v1i1.27
