
Exploring AI/ML Solutions to Deepfake Detection and Prevention | IJCT Volume 12 – Issue 6 | IJCT-V12I6P60

International Journal of Computer Techniques
ISSN 2394-2231
Volume 12, Issue 6 | Published: November – December 2025
Author
Vidhilika Gupta, Paiker Fatima
Abstract
Deepfakes, highly realistic yet fabricated images and videos that jeopardize digital trust, security, and authenticity, have become increasingly common as a result of the rapid development of artificial intelligence and generative models. This paper examines four significant research contributions: Convolutional Neural Network (CNN)-based detection on FaceForensics++ (Rössler et al., 2019) [1], Capsule Networks (Sabir et al., 2019) [2], Vision Transformers (Wang et al., 2023) [3], and blockchain-based media verification (Lin et al., 2022) [4]. It presents a comparative case study of current AI-powered deepfake detection and prevention techniques, evaluating each approach against five performance metrics: accuracy, generalization ability, computational efficiency, real-world adaptability, and scalability. The study concludes by proposing an integrated hybrid framework that combines blockchain-backed verification with AI-based detection.
Keywords
Deepfake detection, Deepfake prevention, Artificial intelligence, Capsule Networks, Vision Transformers, Blockchain, FaceForensics++, Media authenticity, CNN, Hybrid framework, Digital forensics, GAN
Conclusion
Despite notable progress in detecting and preventing deepfakes, current methods still fall short in generalization across varied conditions, computational efficiency, and robustness in real-world deployment. Several directions could address these limitations:
• Hybrid AI–Blockchain Framework
Combining deep learning models (such as CNNs, Capsule Networks, or Vision Transformers) with blockchain-based verification of content origin would allow authenticity to be proven and forgeries detected, improving both trust and traceability.
• Multi-Modal Detection Systems
Most current models analyze only images or video. Incorporating audio, text, and physiological cues such as heartbeat or blinking patterns could improve accuracy and extend coverage to more kinds of deepfakes.
• Lightweight and Efficient Models
Reducing model size, for example through pruning or architectural simplification, would enable faster, real-time inference and deployment on phones and other edge devices.
• Adversarial and Continual Learning
Continual retraining and adversarial-robustness techniques would help detectors keep pace with increasingly sophisticated generation methods.
• Standardized Benchmarks and Open Datasets
Larger and more diverse benchmarks, containing real-world forgeries, varied quality levels, and samples from multiple sources, would allow fairer comparison of competing models.
• Explainable AI (XAI) Integration
Tools that reveal why a model flagged content as fake would increase user trust and allow forensic experts to audit model decisions.
• Policy and Cross-Platform Collaboration
Beyond technical measures, social media platforms, governments, and researchers should collaborate on content-verification standards and on responding quickly to emerging deepfakes.
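The hybrid AI–blockchain idea above can be sketched in simplified form. The `VerificationLedger` class below is a hypothetical stand-in for an append-only blockchain registry: authentic media is fingerprinted with a SHA-256 hash at publication time, and any later copy is verified by recomputing its hash and checking it against the registered digests. Any pixel-level manipulation changes the hash and fails verification.

```python
import hashlib


def fingerprint(content: bytes) -> str:
    """Hash the raw media bytes; this digest is what would be anchored on-chain."""
    return hashlib.sha256(content).hexdigest()


class VerificationLedger:
    """Hypothetical stand-in for an append-only blockchain registry of authentic-media hashes."""

    def __init__(self) -> None:
        self._registered: set[str] = set()

    def register(self, content: bytes) -> str:
        """Record the digest of an authentic item at publication time."""
        digest = fingerprint(content)
        self._registered.add(digest)
        return digest

    def is_authentic(self, content: bytes) -> bool:
        """Verify a later copy: any manipulation changes the digest."""
        return fingerprint(content) in self._registered


ledger = VerificationLedger()
original = b"raw bytes of the original video frame"
ledger.register(original)

tampered = b"raw bytes after a face swap"
print(ledger.is_authentic(original))   # True
print(ledger.is_authentic(tampered))   # False
```

In a full framework, the AI detector would screen unregistered content, while the ledger provides a definitive provenance check for registered media.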
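The multi-modal detection direction can likewise be illustrated with a simple weighted late fusion of per-modality fake probabilities. The modality names and weights below are illustrative assumptions, not values from the cited studies:

```python
def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted late fusion: combine per-modality fake probabilities into one verdict."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w


# Hypothetical per-modality fake probabilities for one clip.
scores = {"visual": 0.91, "audio": 0.35, "physiological": 0.72}
weights = {"visual": 0.5, "audio": 0.3, "physiological": 0.2}

verdict = fuse_scores(scores, weights)  # 0.704
print("fake" if verdict > 0.5 else "real")  # prints "fake"
```

Late fusion keeps each modality's detector independent, so a new cue (e.g. blink-pattern analysis) can be added without retraining the others.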
References
[1] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “FaceForensics++: Learning to detect manipulated facial images,” Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1–11, 2019.
[2] E. Sabir, J. Cheng, A. Jaiswal, W. AbdAlmageed, I. Masi, and P. Natarajan, “Recurrent convolutional strategies for face manipulation detection in videos,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 80–87, 2019.
[3] Y. Wang, J. Li, and Y. Qian, “Vision transformer-based deepfake detection for robust visual forensics,” IEEE Access, vol. 11, pp. 45789–45802, 2023.
[4] Z. Lin, H. Chen, and W. Huang, “Blockchain-based framework for multimedia content verification and deepfake prevention,” Journal of Information Security and Applications, vol. 68, p. 103250, 2022.
[5] Y. Mirsky and W. Lee, “The creation and detection of deepfakes: A survey,” ACM Computing Surveys, vol. 54, no. 1, pp. 1–41, 2021.
[6] L. Verdoliva, “Media forensics and deepfakes: An overview,” IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 5, pp. 910–932, 2020.
How to Cite This Paper
Vidhilika Gupta, Paiker Fatima (2025). Exploring AI/ML Solutions to Deepfake Detection and Prevention. International Journal of Computer Techniques, 12(6). ISSN: 2394-2231.