Sign Language Recognition Using Deep Learning | IJCT Volume 13 – Issue 2 | IJCT-V13I2P68

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Authors

C. Merlyne Sandra, R. Bhargav Reddy, P. Mani Reddy, L. Vamsi

Abstract

Sign language is a vital means of communication for people with hearing and speech disabilities, yet most people do not understand it, which creates communication barriers. This paper proposes a Sign Language Recognition system that detects and interprets hand gestures using computer vision and machine learning techniques. The system captures hand movements through a camera, processes the frames with image-processing algorithms, and applies a trained model to identify each gesture and convert it into text or speech, helping deaf and mute individuals communicate more easily with others. The proposed system improves the accuracy and speed of sign language gesture recognition and can be implemented with technologies such as Python, OpenCV, and deep learning models. The system aims to make communication more inclusive and accessible; future improvements include recognizing a larger gesture vocabulary and supporting multiple languages.
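The capture-and-classify pipeline described above (camera frame → preprocessing → model input) can be illustrated with a minimal preprocessing sketch. This is an assumed implementation, not the paper's own code: the 64×64 single-channel input size and the grayscale/normalization choices are placeholder assumptions, and in practice OpenCV's cv2.cvtColor and cv2.resize would replace the NumPy operations shown here.

```python
import numpy as np

def preprocess(frame, size=64):
    """Turn a BGR camera frame into a normalized CNN input tensor.

    Hypothetical sketch: a real pipeline would use cv2.cvtColor(frame,
    cv2.COLOR_BGR2GRAY) and cv2.resize(gray, (size, size)) instead.
    """
    # Convert BGR to grayscale using standard luminosity weights.
    gray = frame[..., :3] @ np.array([0.114, 0.587, 0.299])
    # Nearest-neighbour resize to size x size via index selection.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[np.ix_(rows, cols)]
    # Scale pixels to [0, 1] and add batch and channel axes for a CNN.
    return (small / 255.0).astype(np.float32).reshape(1, size, size, 1)
```

A frame read from `cv2.VideoCapture(0)` would be passed through `preprocess` before being fed to the trained gesture classifier.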

Keywords

Sign Language Recognition, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Gesture Recognition, Computer Vision, Deep Learning, Assistive Communication Technology, Real-Time Gesture Translation, OpenCV, TensorFlow, Keras, Human-Computer Interaction, AI for Accessibility

Conclusion

The Sign Language Recognition system provides an effective solution for improving communication between deaf or mute individuals and people who do not understand sign language. Using computer vision and machine learning techniques, the system recognizes hand gestures and converts them into text or speech, reducing communication barriers and making everyday interaction easier. The proposed system combines image processing, feature extraction, and deep learning models to identify hand gestures accurately. With tools such as Python, OpenCV, and Convolutional Neural Networks, it performs real-time gesture recognition, and these technologies improve the accuracy and efficiency of recognizing different sign language gestures. In the future, the system can be enhanced by enlarging the dataset and supporting more complex gestures and multiple sign languages; further improvements such as better real-time performance and mobile application integration would make it more practical for real-world use. Overall, the Sign Language Recognition system plays an important role in promoting inclusive communication and accessibility in society.
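The real-time conversion of per-frame predictions into stable text output mentioned above can be sketched with a simple temporal-smoothing step. This is a hedged illustration of one common technique, not the paper's method: the gesture labels, window length, and 60% agreement threshold are all assumptions introduced for the example.

```python
from collections import Counter, deque

# Hypothetical gesture vocabulary; the paper does not list its classes.
LABELS = ["hello", "thanks", "yes", "no", "please"]

class GestureSmoother:
    """Majority-vote over the last N per-frame predictions.

    Emitting a label only when one class dominates a full window avoids
    flicker between classes as the hand moves in a live video stream.
    """

    def __init__(self, window=15):
        self.history = deque(maxlen=window)

    def update(self, class_index):
        """Record one frame's predicted class; return a label or None."""
        self.history.append(class_index)
        if len(self.history) < self.history.maxlen:
            return None  # not enough evidence yet
        label, count = Counter(self.history).most_common(1)[0]
        # Require 60% agreement across the window before speaking/printing.
        if count >= 0.6 * len(self.history):
            return LABELS[label]
        return None
```

In a live loop, `update` would be called once per classified frame, and the returned label (when not None) would be rendered as text or passed to a text-to-speech engine.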

References

[1] S. Mitra and T. Acharya, “Gesture recognition: A survey,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 37, no. 3, pp. 311–324, 2007.
[2] R. Rastgoo, K. Kiani, and S. Escalera, “Hand sign language recognition using deep learning,” IEEE Access, vol. 8, pp. 12345–12356, 2020.
[3] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000.
[4] F. Chollet, Deep Learning with Python. Manning Publications, 2017.
[5] T. Starner and A. Pentland, “Real-time American sign language recognition from video,” Proceedings of the IEEE International Symposium on Computer Vision, 1995.

How to Cite This Paper

C. Merlyne Sandra, R. Bhargav Reddy, P. Mani Reddy, L. Vamsi (2026). Sign Language Recognition Using Deep Learning. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
