
Awareness of Voice Spoofing in Smart Assistance


International Journal of Computer Techniques
ISSN 2394-2231
Volume 12, Issue 6 | Published: November–December 2025 | Paper ID: IJCT-V12I6P2
Authors
Ayesha Chhagala, Nikhat Shaikh, Prof. Rubina Sheikh
Abstract
Voice assistants have become an important part of daily life, making tasks easier through voice commands. However, these systems are vulnerable to voice spoofing, where attackers use fake or recorded voices to trick smart assistants into granting unauthorised access or performing unintended actions. Raising awareness about this threat is essential to ensure users understand the risks and adopt safe practices. This paper highlights the ways voice spoofing can occur, the potential consequences, and the latest strategies users and developers can employ to recognise and prevent such attacks. By increasing awareness and promoting easy-to-follow protective measures, the safety of smart assistant technology can be significantly improved.
Keywords
voice spoofing, smart assistants, awareness, security, automatic speaker verification, speech synthesis, voice conversion, replay attack, user education, anti-spoofing
Conclusion
This research highlights that while smart assistants are increasingly integrated into daily life for tasks such as information retrieval, banking, shopping, and smart home control, significant concerns about security and trust persist among users. The survey results demonstrate that:
- A majority of users value brand reputation, visible compliance certifications, and convenient everyday functionality more than deep technical or regulatory understanding.
- Awareness of threats such as voice spoofing and AI-generated deepfake voices remains limited, indicating the need for improved education and transparency from service providers.
- Many users have experienced misrecognition or errors with their smart assistants, which undermines their confidence in voice authentication, especially for sensitive actions such as payments.
- There is strong demand for multi-factor authentication and clearer privacy controls, with a substantial portion of users preferring additional security steps for peace of mind.
- Despite provider efforts, only a minority of users currently consider voice-based authentication safe enough for high-stakes transactions.
Overall, trust in smart assistants is shaped by a combination of reliable performance, perceived security, transparent data handling, and established brand reputation, rather than by detailed legal compliance or technical mechanisms alone. To move forward, providers and researchers should prioritize user education on evolving risks, continual enhancement of anti-spoofing technologies, and the development of user-centric privacy controls, ensuring that security advances align with actual user concerns and expectations; one possible shape of such layered protection is sketched after this conclusion. This approach will be essential for fostering greater trust, adoption, and resilience in the rapidly evolving ecosystem of voice-enabled smart assistants.
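To make the multi-factor recommendation concrete, the sketch below shows one way an assistant could gate a payment-style command behind three independent checks: a speaker-verification score, an anti-spoofing countermeasure score, and an out-of-band second factor. This is a minimal illustration under assumed inputs, not the design of any system discussed in the paper; the function name, score ranges, and thresholds are hypothetical placeholders for whatever verification backend, countermeasure model, and second factor a provider actually deploys.

# Hypothetical sketch: layered authorization for a sensitive voice command.
# The two scores would come from a speaker-verification model and an
# anti-spoofing countermeasure; here they are plain inputs in [0, 1],
# and the thresholds are illustrative assumptions, not survey values.

SPEAKER_THRESHOLD = 0.80  # assumed: minimum voiceprint similarity to accept
SPOOF_THRESHOLD = 0.50    # assumed: maximum tolerated spoof likelihood

def authorize_sensitive_action(speaker_score: float,
                               spoof_score: float,
                               second_factor_ok: bool) -> bool:
    """Allow a payment-style command only if the voice matches the
    enrolled user, the audio does not look replayed or synthetic,
    and the user confirmed via a second factor (e.g. PIN or app prompt)."""
    if speaker_score < SPEAKER_THRESHOLD:
        return False           # voice does not match the enrolled user
    if spoof_score > SPOOF_THRESHOLD:
        return False           # countermeasure flags likely replay/synthesis
    return second_factor_ok    # a voice match alone never authorizes payment

# Even a confident, clean voice match fails without the second factor:
print(authorize_sensitive_action(0.92, 0.10, second_factor_ok=False))  # False
print(authorize_sensitive_action(0.92, 0.10, second_factor_ok=True))   # True

The point of the layered check is that each factor covers a different failure mode: the speaker score rejects strangers, the countermeasure rejects recordings and synthetic voices that may still match the voiceprint, and the second factor protects the user even when both audio checks are fooled.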
© 2025 International Journal of Computer Techniques (IJCT).