International Journal of Computer Techniques, Volume 12, Issue 3

Sign2Text: Bridging Communication for Deaf and Non-Speaking Individuals

Authors: Deepali Kumari, Ram Kumar Sharma

Abstract

Sign language is a primary means of communication for Deaf and non-speaking individuals. The proposed Sign2Text system combines a CNN-RNN deep learning model for gesture recognition with HMM-based speech synthesis, converting sign gestures into text and speech to enhance accessibility.
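The CNN-RNN pipeline described in the abstract can be sketched as follows: a convolutional stage extracts a feature vector from each video frame, and a recurrent stage integrates those features over time before classifying the gesture. This is a minimal numpy illustration with toy dimensions and random weights, not the authors' actual architecture; all layer sizes, kernel counts, and class counts here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(frame, kernel):
    # Valid-mode 2D cross-correlation on a single-channel frame:
    # the basic CNN building block.
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def frame_features(frame, kernels):
    # One conv per kernel, ReLU, then global average pooling,
    # yielding a compact per-frame feature vector.
    return np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy dimensions (assumed): 8 frames of 16x16 grayscale hand crops,
# 4 conv kernels, hidden size 6, 5 gesture classes.
T, H, W, K, Hdim, C = 8, 16, 16, 4, 6, 5
frames = rng.standard_normal((T, H, W))
kernels = rng.standard_normal((K, 3, 3))
Wx = rng.standard_normal((Hdim, K)) * 0.1
Wh = rng.standard_normal((Hdim, Hdim)) * 0.1
b = np.zeros(Hdim)
Wo = rng.standard_normal((C, Hdim)) * 0.1

# Simple RNN over per-frame CNN features; the final hidden state
# classifies the whole gesture sequence.
h = np.zeros(Hdim)
for frame in frames:
    x = frame_features(frame, kernels)
    h = np.tanh(Wx @ x + Wh @ h + b)
probs = softmax(Wo @ h)
```

A trained system would learn the kernels and weight matrices from labeled gesture videos; the structure of the forward pass is what this sketch is meant to show.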

Keywords

HMM, ASL, BSL, CNN, RNN, Gesture recognition, Deep learning, Text-to-Speech

Conclusion

The proposed system successfully recognizes sign gestures and translates them into speech. Future work includes improving real-time accuracy using Transformer models and refining dataset annotation techniques.
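The HMM-based synthesis stage mentioned above can be illustrated with a toy left-to-right HMM: each state carries a Gaussian mean over acoustic parameters (a stand-in for the mel-cepstral coefficients real HMM-TTS systems use), and synthesis walks the state chain emitting a parameter trajectory. All transition probabilities and means here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy left-to-right HMM for one spoken unit: 3 states, each emitting
# a 2-dim acoustic parameter vector (illustrative values only).
trans = np.array([
    [0.7, 0.3, 0.0],   # state 0: stay or advance
    [0.0, 0.7, 0.3],   # state 1: stay or advance
    [0.0, 0.0, 1.0],   # state 2: absorbing final state
])
means = np.array([[0.0, 1.0], [2.0, -1.0], [4.0, 0.5]])

def synthesize(n_frames):
    # Walk the left-to-right chain, emitting each state's mean as the
    # frame's acoustic parameters (a maximum-likelihood-style output).
    state, traj = 0, []
    for _ in range(n_frames):
        traj.append(means[state])
        state = rng.choice(3, p=trans[state])
    return np.array(traj)

traj = synthesize(10)
```

A vocoder would then turn the parameter trajectory into a waveform; the state-chain structure is the part this sketch illustrates.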

