The Sign Language Interpreter System helps hearing-impaired individuals communicate with others by narrowing the gap between sign language users and non-users. It uses computer vision and machine learning techniques to detect hand gestures through a camera and convert them into text or speech. The system recognizes finger positions and gesture patterns to identify the intended sign and provides real-time translation, making communication faster and more accessible. The translated output can also be generated in multiple languages, so the system serves users across language backgrounds. Applicable in education, healthcare, and public services, the system promotes inclusivity and enables better interaction for differently-abled individuals.
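To make the gesture-recognition step concrete, the sketch below shows how hand landmarks might be extracted from a camera frame using MediaPipe and OpenCV, the libraries this paper names. It is a minimal illustration rather than the authors' implementation: the camera index and the classify_gesture model are hypothetical placeholders, since the paper does not detail its trained classifier.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    # Capture a single frame from the default camera (index 0 is an assumption).
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()

    if ok:
        # Static-image mode for simplicity; the real system would loop over frames.
        with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
            # MediaPipe expects RGB input; OpenCV delivers BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            landmarks = results.multi_hand_landmarks[0].landmark
            # 21 normalized (x, y, z) landmarks encode finger positions;
            # flattened, they form the feature vector for a gesture classifier.
            features = [c for lm in landmarks for c in (lm.x, lm.y, lm.z)]
            print(f"Extracted {len(features)} features")  # 63 values per hand
            # label = classify_gesture(features)  # hypothetical trained model

Each detected hand yields a compact vector of 63 values (21 landmarks times x, y, z), the kind of lightweight input a classifier can process in real time.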
The system is implemented using technologies such as OpenCV and MediaPipe for gesture and facial feature detection, along with a Flask-based web application for user interaction. It also includes multilingual translation capabilities, allowing the output to be generated in different Indian languages. Additionally, speech synthesis is used to convert text into voice, enabling smooth communication with non-sign language users.
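A minimal sketch of how the Flask interaction layer could expose translation follows. The /translate route, the JSON fields, and the translate_text stub are illustrative assumptions; the paper does not specify its translation backend. Speech synthesis can then run client-side through the browser's Web Speech API (see the reference list), so the route returns text only.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def translate_text(text, target):
        # Hypothetical stub: the paper does not name its translation backend.
        # A real deployment would call a translation service for the target
        # Indian language here.
        return text

    @app.route("/translate", methods=["POST"])
    def translate():
        # The recognized sign text and target language code come from the client.
        data = request.get_json(silent=True) or {}
        translated = translate_text(data.get("text", ""), data.get("lang", "hi"))
        # Voice output can be produced in the browser via the Web Speech API,
        # so the server returns translated text only.
        return jsonify({"translated": translated})

    if __name__ == "__main__":
        app.run(debug=True)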
Keywords
Sign Language Recognition; Machine Learning; Computer Vision; MediaPipe; OpenCV; Flask; Multilingual Translation; Speech Synthesis; Emotion Detection
Conclusion
The Sign Language Interpreter System using Machine Learning bridges the communication gap between hearing- and speech-impaired individuals and others. By combining computer vision and machine learning techniques, the system recognizes hand gestures in real time and converts them into meaningful text and speech. MediaPipe and OpenCV provide accurate detection of hand and facial features, while the Flask-based web application offers an easy-to-use, accessible platform. Emotion detection and multilingual translation further enhance the system, making communication more expressive and inclusive. Overall, the system demonstrates good accuracy, fast response times, and reliable performance, making it suitable for real-time applications.
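As a closing illustration of the facial-feature side mentioned above, the sketch below extracts MediaPipe FaceMesh landmarks of the kind an emotion classifier could consume. The feature extraction follows the documented MediaPipe API; the emotion model itself is not shown because the paper does not describe it.

    import cv2
    import mediapipe as mp

    mp_face_mesh = mp.solutions.face_mesh

    def extract_face_features(frame_bgr):
        # Returns the 468 normalized face-mesh landmarks flattened into one
        # vector, or None if no face is detected in the frame.
        with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
            results = fm.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            return None
        landmarks = results.multi_face_landmarks[0].landmark
        return [c for lm in landmarks for c in (lm.x, lm.y, lm.z)]

    # A trained emotion classifier would consume this vector; the paper does
    # not describe its model, so none is shown here.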
References
1. Nguyen, T., & Rao, S. (2025). Cross-Language Sign Interpretation Using Multimodal Transformers. IEEE Transactions on Affective Computing.
2. Banerjee, D., & Menon, S. (2024). Emotion-Aware Sign Language Translation Using MediaPipe and NLP. International Journal of Artificial Intelligence Research.
3. Patel, R., & Das, A. (2023). Vision-Based Indian Sign Language Recognition for Inclusive Communication. Springer Journal of Intelligent Systems.
4. Rajasekar, R., Balamurugan, G., & Dinesh, T. (2022). Multimodal Sign and Emotion Recognition Using MediaPipe Framework. Journal of Multimedia Tools and Applications.
5. Kumar, V., Sharma, A., & Nair, M. (2021). Indian Sign Language Interpreter Using CNN and OpenCV. International Journal of Emerging Technologies in Engineering Research.
6. OpenCV Documentation. (2024). Open Source Computer Vision Library. Available at: https://opencv.org
7. Google MediaPipe. (2024). MediaPipe Framework Documentation. Available at: https://mediapipe.dev
8. Flask Documentation. (2024). Flask Web Framework. Available at: https://flask.palletsprojects.com
9. Mozilla Developer Network (MDN). (2024). Web Speech API. Available at: https://developer.mozilla.org
How to Cite This Paper
K. Prashanth Kumar, G. Santhosh Reddy, K. Karthik Reddy, Mr. A. T. Barani Vijaya Kumar (2026). Sign Language Interpreter System Using Machine Learning. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.