DRISHTI: AI-Powered Assistive Companion System for Visually Impaired Individuals for Daily Life Tasks | IJCT Volume 13 – Issue 3 | IJCT-V13I3P3

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Authors

Shristi Tripathi, Dr. Pawan Kumar Pandey, Mudit Dubey

Abstract

This paper presents the design and implementation of Drishti, an AI-powered assistive mobile application developed to enhance the mobility and independence of visually impaired individuals. Traditional assistive methods, such as white canes, provide limited environmental awareness and lack intelligent interaction capabilities, making navigation and daily activities challenging. To address these limitations, the proposed system is built with the Flutter framework for cross-platform mobile development and integrates TensorFlow Lite for real-time object detection, together with speech processing for voice-based interaction. The system also uses the Gemini API to deliver contextual, intelligent responses to user input. A voice-driven interface allows users to operate the application without relying on visual elements. Key functionalities include environment detection, text recognition, and obstacle-aware navigation assistance. By processing real-time camera input and delivering audio feedback, Drishti enhances situational awareness and supports safer navigation. Implementation results demonstrate improved accessibility, reliable performance, and user convenience under real-world conditions: the system achieves object detection accuracy of approximately 80–85% and voice recognition accuracy of 85–90%, making it a practical and effective alternative to traditional assistive approaches.
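The abstract describes a pipeline in which raw detections from the camera are turned into audio feedback. A minimal sketch of that idea in Python follows; it is illustrative only, not the authors' code: the labels, scores, and the 0.5 confidence threshold are hypothetical, and the actual application runs TensorFlow Lite inside a Flutter client rather than Python.

```python
# Illustrative sketch (not the authors' implementation): filter raw
# object-detection output by a confidence threshold and compose the
# spoken-feedback sentence the abstract describes. All values hypothetical.

def compose_feedback(detections, threshold=0.5):
    """Keep detections at or above `threshold` and build a spoken message."""
    kept = [(label, score) for label, score in detections if score >= threshold]
    if not kept:
        return "No obstacles detected."
    # Announce the most confident objects first.
    kept.sort(key=lambda d: d[1], reverse=True)
    names = ", ".join(label for label, _ in kept)
    return f"Detected: {names}."

# Example with made-up detector output for one camera frame.
frame_detections = [("chair", 0.91), ("person", 0.78), ("cup", 0.30)]
print(compose_feedback(frame_detections))  # Detected: chair, person.
```

In the real system the resulting string would be handed to a text-to-speech engine; the threshold trades off missed obstacles against false announcements.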

Keywords

Assistive Technology, Artificial Intelligence, Computer Vision, Flutter, Object Detection, Voice Interaction, Accessibility, Navigation System, TensorFlow Lite, Mobile Application

Conclusion

The Drishti Assistive Navigation System offers a practical solution for helping visually impaired individuals in digital and mobile environments. It replaces traditional assistive methods with an AI-driven approach that improves accessibility, efficiency, and user independence. By automating tasks such as object detection, navigation guidance, and voice interaction, the system reduces manual effort and supports real-time decision-making. It is user-friendly and can be set up on common smartphones, making it accessible to a wide range of users. Its voice-driven interface lets visually impaired individuals interact with the system without visual input or advanced technical skills, and the integration of multiple functions into a single platform improves usability and convenience. The system further enhances safety by detecting obstacles and guiding users with real-time environmental awareness, while maintaining steady, reliable operation across varied conditions. A modular design keeps the system organised, scalable, and ready for future upgrades. Overall, Drishti advances assistive technology by combining artificial intelligence, computer vision, and mobile computing into a single, reliable, and cost-effective platform that improves the quality of life of visually impaired individuals and supports the larger goal of inclusive, accessible technology.
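The conclusion emphasises a modular design that integrates several functions into one platform. One common way to realise that, sketched here in Python under assumed names (the paper does not publish its architecture, and every class and method below is hypothetical), is to place each capability behind a shared interface so modules can be added or upgraded independently:

```python
# Illustrative sketch of a modular layout like the one the conclusion
# describes: each capability (object detection, voice interaction, ...)
# implements a common contract, and a small router dispatches to whichever
# module is registered. Names are hypothetical, not the authors' design.
from abc import ABC, abstractmethod

class AssistiveModule(ABC):
    """Contract every feature module implements."""
    @abstractmethod
    def handle(self, event: str) -> str: ...

class ObjectDetection(AssistiveModule):
    def handle(self, event: str) -> str:
        return f"detection result for {event}"

class VoiceInterface(AssistiveModule):
    def handle(self, event: str) -> str:
        return f"spoken response to {event}"

class Drishti:
    """Routes incoming events to the module registered for them."""
    def __init__(self) -> None:
        self.modules: dict[str, AssistiveModule] = {}

    def register(self, name: str, module: AssistiveModule) -> None:
        self.modules[name] = module

    def dispatch(self, name: str, event: str) -> str:
        return self.modules[name].handle(event)

app = Drishti()
app.register("detect", ObjectDetection())
app.register("voice", VoiceInterface())
print(app.dispatch("detect", "camera frame"))  # detection result for camera frame
```

Keeping modules behind one interface is what makes the "ready for future upgrades" claim concrete: a new capability (say, text recognition) only needs to implement the same contract and register itself.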


How to Cite This Paper

Shristi Tripathi, Dr. Pawan Kumar Pandey, Mudit Dubey (2026). DRISHTI: AI-Powered Assistive Companion System for Visually Impaired Individuals for Daily Life Tasks. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
