Real-Time Sign Language Recognition for Inclusive Online Meeting Communication: A Machine Learning Approach
DOI:
https://doi.org/10.15662/IJEETR.2026.0802011

Keywords:
Sign Language Recognition, Machine Learning, LSTM, MediaPipe, Online Meetings, Accessibility, TensorFlow.js, Human-Computer Interaction, Real-Time Processing

Abstract
Communication barriers in online meetings often exclude individuals with speech impairments who rely on sign language. This paper presents the development of a web-based online meeting application that enables effective communication between speech-impaired users and non-sign-language users by integrating machine-learning-based sign language recognition. The proposed system captures hand gestures through a webcam, processes them with MediaPipe Holistic for landmark extraction, and classifies gestures with a supervised model built from Long Short-Term Memory (LSTM) layers, which process the temporal landmark sequences to recognize dynamic gestures in real time, followed by Dense layers for classification. Recognized gestures are converted into corresponding text displayed within the meeting interface, enabling seamless interaction without requiring knowledge of sign language. The system runs in the web browser using TensorFlow.js for client-side processing, ensuring privacy and low latency. This application promotes inclusivity and accessibility in virtual communication environments, is particularly beneficial for educational, professional, and social online meeting scenarios, and helps bridge the communication gap for speech-impaired individuals.
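The pipeline described above, per-frame landmark extraction followed by buffering a fixed-length temporal window for the LSTM, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the landmark counts follow MediaPipe Holistic's documented output (33 pose landmarks with visibility, 468 face landmarks, and 21 landmarks per hand), while the 30-frame window length and all function names are assumptions introduced here for illustration.

```python
import numpy as np
from collections import deque

# MediaPipe Holistic landmark counts: pose has (x, y, z, visibility),
# face and hands have (x, y, z) only.
POSE, FACE, HAND = 33 * 4, 468 * 3, 21 * 3
FRAME_FEATURES = POSE + FACE + 2 * HAND  # 1662 values per frame
SEQ_LEN = 30  # frames per gesture window (an assumption, not from the paper)

def flatten_landmarks(results):
    """Concatenate all Holistic landmarks for one frame into a flat vector.
    Missing detections (e.g. a hand out of view) become zero vectors."""
    def block(landmarks, size, with_visibility=False):
        if landmarks is None:
            return np.zeros(size)
        vals = [[p.x, p.y, p.z] + ([p.visibility] if with_visibility else [])
                for p in landmarks.landmark]
        return np.array(vals).flatten()
    return np.concatenate([
        block(results.pose_landmarks, POSE, with_visibility=True),
        block(results.face_landmarks, FACE),
        block(results.left_hand_landmarks, HAND),
        block(results.right_hand_landmarks, HAND),
    ])

class GestureWindow:
    """Rolling buffer of per-frame feature vectors fed to the sequence model."""
    def __init__(self, seq_len=SEQ_LEN):
        self.frames = deque(maxlen=seq_len)

    def push(self, feature_vector):
        self.frames.append(feature_vector)

    def ready(self):
        return len(self.frames) == self.frames.maxlen

    def as_batch(self):
        # Shape (1, SEQ_LEN, FRAME_FEATURES): one sequence for prediction.
        return np.expand_dims(np.stack(self.frames), axis=0)
```

In a deployment like the one the abstract describes, the equivalent buffering and inference would run client-side in TensorFlow.js on the converted model, keeping video frames on the user's device.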
References
[1] World Health Organization, “Deafness and hearing loss,” WHO Fact Sheets, 2021.
[2] M. Faisal et al., “A Review of Real-Time Sign Language Recognition for Online Meetings and Virtual Communication,” IEEE Access, vol. 12, pp. 25467-25483, 2024.
[3] “Signcall: Bridging Communication Gaps in Virtual Meetings through AI-Powered Sign Language Recognition,” International Journal of Fundamental and Multidisciplinary Research, vol. 7, no. 2, pp. 1-8, 2025.
[4] Deaf Friendly Consulting, “How to Make Virtual Meetings More Inclusive for Deaf Participants,” 2025.
[5] M. Elmahgiubi, M. Ennajar, N. Drawil, and M. S. Elbuni, “Sign Language Translator and Gesture Recognition,” in Proc. IEEE Global Conference on Signal and Information Processing, pp. 995-999, 2015.
[6] R. Kumar et al., “Mediapipe and CNNs for Real-Time ASL Gesture Recognition,” arXiv preprint arXiv:2305.05296, 2023.
[7] AWS re:Invent, “Plug and Play with AI Sign Language Recognition,” Conference Presentation, 2023.
[8] P. C. Badhe and V. Kulkarni, “Indian Sign Language Translator Using Gesture Recognition Algorithm,” in Proc. IEEE Int. Conf. Computer Graphics, Vision and Information Security, pp. 195-200, 2015.
[9] J. Ahmad et al., “Dynamic Hand Gesture Recognition Using 3D-CNN and LSTM Networks,” Computers, Materials & Continua, vol. 70, no. 3, pp. 4675-4690, 2022.
[10] K. Ellis and J. C. Barca, “Exploring Sensor Gloves for Teaching Children Sign Language,” Advances in Human-Computer Interaction, vol. 2012, Article ID 924692, 8 pages, 2012.
[11] “Sign Language Translator Using Data Glove Approach,” IEEE Conference Publication, 2015.
[12] P. C. Badhe and V. Kulkarni, “Indian Sign Language Translator Using Gesture Recognition Algorithm,” in Proc. IEEE Int. Conf. CGVIS, pp. 195-200, 2015.
[13] P. S. Rajam and G. Balakrishnan, “Real Time Indian Sign Language Recognition System to Aid Deaf-Dumb People,” in Proc. 13th IEEE Int. Conf. Communication Technology, pp. 737-742, 2011.
[14] R. Janani et al., “Sign Language Translation,” in Proc. 6th Int. Conf. Advanced Computing and Communication Systems, pp. 883-886, 2020.
[15] Google MediaPipe, “MediaPipe Hands: On-device Real-time Hand Tracking,” Google AI Blog, 2020.
[16] F. Zhang et al., “MediaPipe Hands: On-device Real-time Hand Tracking,” arXiv preprint arXiv:2006.10214, 2020.
[17] S. Biswas et al., “MediaPipe with LSTM Architecture for Real-Time Hand Gesture Recognition,” in Proc. Int. Conf. Computer Vision and Image Processing, pp. 234-245, 2023.
[18] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[19] S. Kashyap, S. Saxena, S. Gautam, and L. Singh, “Real-time Gesture Recognition System using Mediapipe and LSTM Neural Networks,” Journal of Computational Science, vol. 15, no. 3, pp. 421-435, 2025.
[20] J. Ahmad et al., “Dynamic Hand Gesture Recognition Using 3D-CNN and LSTM Networks,” Computers, Materials & Continua, vol. 70, no. 3, pp. 4675-4690, 2022.
[21] “Signcall: Bridging Communication Gaps in Virtual Meetings,” IJFMR, vol. 7, no. 2, 2025.
[22] R. Mapari and G. Kharat, “Hand Gesture Recognition Using Neural Network,” International Journal of Computer Science and Network, vol. 1, no. 4, pp. 25-30, 2012.
[23] “Comparing Classification Algorithms on Sign Language Recognition,” IJFANS, vol. 11, no. 3, pp. 6382-6391, 2022.
[24] V. N. T. Truong, C. K. Yang, and Q. V. Tran, “A Translator for American Sign Language to Text and Speech,” in Proc. IEEE 5th Global Conf. Consumer Electronics, pp. 492-493, 2016.
[25] R. Kumar et al., “Mediapipe and CNNs for Real-Time ASL Gesture Recognition,” arXiv preprint arXiv:2305.05296, 2023.
[26] D. Smilkov et al., “TensorFlow.js: Machine Learning for the Web and Beyond,” in Proc. Conference on Machine Learning and Systems, 2019.
[27] Atlantic.net, “How to Run Machine Learning Models in Your Browser with TensorFlow.js,” Technical Guide, 2025.
[28] V. S. F. Khan, “Unlocking Machine Learning in the Browser with TensorFlow.js,” Dev.to Technical Blog, 2024.
[29] Google TensorFlow, “TensorFlow.js Converter,” Documentation, 2024.
[30] C. Lugaresi et al., “MediaPipe: A Framework for Building Perception Pipelines,” arXiv preprint arXiv:1906.08172, 2019.