Sign Language Recognition System | IJCT Volume 13 – Issue 2 | IJCT-V13I2P65

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Authors

Ms. Ruby Angel, Akshaya GM, Nuha Zahra Fathima, Shankavi Ravichandran

Abstract

Communication between hearing-impaired individuals and the general population remains a significant challenge due to the limited understanding of sign language. This paper presents a real-time sign language recognition system that utilizes computer vision and deep learning techniques to interpret hand gestures and convert them into readable text. The proposed system captures live video input through a webcam, processes the hand region using image preprocessing techniques, and classifies gestures using a Convolutional Neural Network (CNN). The system is designed to ensure high accuracy, low latency, and efficient real-time performance. A structured dataset of hand gestures is used for training, and the model is optimized to handle variations in lighting conditions, hand orientation, and background noise. The system enables continuous gesture recognition and provides immediate textual output, thereby facilitating seamless communication. The proposed solution is scalable and can be extended to support voice output and sentence formation, making it suitable for real-world assistive applications. Experimental results demonstrate the effectiveness of the system in improving accessibility and reducing communication barriers.

Keywords

Sign Language Recognition, Computer Vision, Deep Learning, CNN, Real-Time Detection, Assistive Technology, Gesture Recognition

Conclusion

The proposed real-time sign language recognition system presents an effective and scalable solution to address the communication challenges faced by hearing and speech-impaired individuals. By integrating computer vision techniques with deep learning-based classification, the system successfully interprets hand gestures and converts them into readable text, enabling seamless interaction between users and the general population.

A key contribution of this work lies in its ability to perform real-time gesture recognition with minimal latency while maintaining satisfactory accuracy. The use of Convolutional Neural Networks (CNNs) allows the system to automatically extract relevant features from input images, eliminating the need for manual feature engineering. Additionally, preprocessing techniques such as background subtraction, normalization, and noise reduction significantly enhance detection reliability under varying conditions.

The modular architecture of the system ensures flexibility, scalability, and ease of integration with future technologies. Each component, including data acquisition, preprocessing, model classification, and output generation, operates efficiently while maintaining continuous data flow. This design approach enables the system to handle real-time input streams and ensures consistent performance across different operational scenarios.

From a practical perspective, the system offers a user-friendly interface that allows individuals to communicate using simple hand gestures without requiring specialized hardware. The reliance on a standard webcam and software-based processing makes the solution cost-effective and accessible. This enhances its potential for deployment in real-world applications such as assistive communication tools, educational platforms, and smart interactive systems.
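The continuous-recognition output stage described above — turning per-frame classifier scores into stable text — could be sketched as follows. The label set, the confidence threshold, and the frame-window size are hypothetical values chosen for illustration; the paper does not specify them.

```python
from collections import deque
import numpy as np

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # hypothetical gesture classes
CONF_THRESHOLD = 0.80                                     # assumed confidence cut-off

def decode_prediction(probs, threshold=CONF_THRESHOLD):
    """Map a softmax output vector to a gesture label, or None if uncertain."""
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return None  # reject low-confidence frames to reduce output flicker
    return LABELS[idx]

class GestureStabilizer:
    """Emit a character only after it is predicted for `window` consecutive frames."""
    def __init__(self, window=5):
        self.window = window
        self.recent = deque(maxlen=window)

    def update(self, label):
        self.recent.append(label)
        if label is not None and self.recent.count(label) == self.window:
            self.recent.clear()  # avoid re-emitting the same held gesture
            return label
        return None
```

Requiring the same prediction over several consecutive frames is one common way to keep the textual output steady while the user's hand moves between gestures; it trades a few frames of latency for far fewer spurious characters.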
The experimental results demonstrate that the system achieves reliable performance in recognizing gestures under controlled and moderately variable environments. The real-time feedback mechanism ensures smooth interaction, making the system suitable for everyday use. Although certain limitations exist, such as sensitivity to extreme lighting conditions and background complexity, the overall performance validates the feasibility of the proposed approach.

Furthermore, the system contributes to the broader goal of developing inclusive technologies that support accessibility and equal opportunities. By reducing dependence on human interpreters and enabling independent communication, the proposed solution empowers individuals with hearing impairments and improves their quality of life.

Future work can focus on enhancing the system by incorporating advanced deep learning models, expanding the dataset for improved generalization, and enabling dynamic gesture recognition for continuous sentence formation. Integration with speech synthesis systems can further extend functionality by converting text output into audio, creating a complete communication framework.

In conclusion, the proposed sign language recognition system demonstrates the potential of combining computer vision and artificial intelligence to solve real-world problems. It provides a strong foundation for further research and development in assistive technologies and highlights the importance of leveraging modern computational techniques to build more inclusive and intelligent systems.

References

[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep Learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[2] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.
[3] OpenCV, "Open Source Computer Vision Library," 2024. [Online]. Available: https://opencv.org/
[4] TensorFlow, "An End-to-End Open Source Machine Learning Platform," 2024. [Online]. Available: https://www.tensorflow.org/
[5] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in Proc. International Conference on Learning Representations (ICLR), 2015.
[6] D. Cireşan, U. Meier, and J. Schmidhuber, "Multi-column Deep Neural Networks for Image Classification," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[7] R. Rastgoo, K. Kiani, and S. Escalera, "Video-Based Isolated Hand Sign Language Recognition Using a Deep Learning Framework," IEEE Access, vol. 8, pp. 191895–191906, 2020.
[8] S. Ong and S. Ranganath, "Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 873–891, 2005.
[9] O. Koller, S. Zargaran, H. Ney, and R. Bowden, "Deep Sign: Hybrid CNN-HMM for Continuous Sign Language Recognition," in Proc. British Machine Vision Conference (BMVC), 2016.
[10] J. Redmon et al., "You Only Look Once: Unified, Real-Time Object Detection," in Proc. IEEE CVPR, 2016.
[11] A. Howard et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.
[12] World Health Organization, "World Report on Disability," WHO Press, 2023.

How to Cite This Paper

Ms. Ruby Angel, Akshaya GM, Nuha Zahra Fathima, Shankavi Ravichandran (2026). Sign Language Recognition System. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.