Mr. Ram Kumar Sharma – Assistant Professor, Dept. of IT, NIET, Greater Noida, India. ramsharma533@gmail.com
Abstract
This paper presents a video-based sign language recognition system that processes real-time video input with computer vision and deep learning models to identify Indian Sign Language (ISL) gestures and convert them into speech output. By bridging the communication gap faced by the hearing-impaired, the system promotes inclusivity and accessibility in both social and professional settings.
Keywords
Sign Language Recognition, AI in Accessibility, Computer Vision, Deep Learning, Indian Sign Language, Assistive Communication, Text-to-Speech, ISL to Speech.
Conclusion
The AI-driven ISL recognition system improves communication accessibility by converting gestures into speech using computer vision and deep learning models. Future work may focus on expanding the gesture database, optimizing real-time recognition speed, and integrating multilingual output for broader accessibility.
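To make the gesture-recognition stage concrete, the sketch below shows a landmark-normalization step commonly used in systems of this kind before a deep learning classifier is applied. It assumes MediaPipe-style 21-point (x, y) hand landmarks as input; the function name `normalize_landmarks` and the exact normalization scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Turn 21 (x, y) hand landmarks into a position- and
    scale-invariant feature vector for a gesture classifier.

    Steps:
      1. translate so the wrist (landmark 0) is the origin,
      2. divide by the largest absolute offset so values lie in [-1, 1],
      3. flatten to a 42-dimensional feature vector.
    """
    pts = np.asarray(landmarks, dtype=float).reshape(21, 2)
    pts = pts - pts[0]          # wrist-relative coordinates
    scale = np.abs(pts).max()   # largest offset from the wrist
    if scale > 0:
        pts = pts / scale       # scale-normalize the hand
    return pts.flatten()        # 42-dim vector fed to the model
```

Normalizing in this way means the classifier sees the same feature vector regardless of where the hand appears in the frame or how close it is to the camera, which is one reason landmark-based pipelines generalize better than raw-pixel ones.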