
Federated Learning with Transformers: Privacy-Preserving AI at Scale

International Journal of Computer Techniques – Volume 12 Issue 2

Srikanth Kamatala
Independent Researcher. Email: Kamatala.Srikanth@gmail.com

Abstract

Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models across decentralized devices while preserving data privacy. With the rise of transformer-based architectures in natural language processing and computer vision, integrating these models into federated learning frameworks presents unique challenges and opportunities. This paper explores the intersection of federated learning and transformer models, focusing on privacy-preserving techniques that enable secure, efficient training at scale. We propose a novel framework that addresses key challenges, including memory constraints, communication overhead, and privacy-utility trade-offs. Our experiments demonstrate that the proposed approach achieves performance comparable to centralized training while providing strong privacy guarantees, making it suitable for privacy-sensitive applications in healthcare, finance, and personal computing.
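To make the privacy mechanism referenced above concrete, the sketch below illustrates one standard building block that frameworks of this kind typically rely on: DP-FedAvg-style aggregation, in which each client's model update is clipped to a fixed L2 norm and Gaussian noise is added before averaging. This is a minimal illustrative example, not the framework proposed in the paper; the function names, clipping norm, and noise calibration are assumptions chosen for clarity.

```python
# Illustrative sketch of DP-FedAvg-style aggregation (not the paper's framework).
# Each client's update is clipped to a fixed L2 norm; the server averages the
# clipped updates and adds Gaussian noise calibrated to that norm.
import numpy as np


def clip_update(update, clip_norm):
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))


def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Average clipped client updates and add Gaussian noise (illustrative values)."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    # Sensitivity of the mean of clipped updates is clip_norm / n,
    # so the noise standard deviation is scaled accordingly.
    noise_std = noise_multiplier * clip_norm / len(client_updates)
    return mean_update + rng.normal(0.0, noise_std, size=mean_update.shape)


if __name__ == "__main__":
    # Toy usage: three clients send updates for a flattened transformer layer.
    updates = [np.random.default_rng(i).normal(size=128) for i in range(3)]
    new_global_delta = dp_federated_average(updates, clip_norm=0.5, noise_multiplier=1.1)
    print("aggregated update norm:", np.linalg.norm(new_global_delta))
```

In practice, the same clip-then-noise step would be applied per round to the (often very large) transformer parameter delta, which is why memory constraints and communication overhead appear alongside the privacy-utility trade-off as the key challenges discussed here.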

Index Terms

Federated Learning, Transformer Models, Differential Privacy, Privacy-Preserving AI, Large Language Models, Distributed Computing.

Acknowledgments

The author would like to acknowledge the contributions of researchers and industry experts whose insights have shaped the discourse on Federated Learning with Transformers. This independent research does not rely on any specific institution, infrastructure, or proprietary data.


How to Cite

Srikanth Kamatala, “Federated Learning with Transformers: Privacy-Preserving AI at Scale,” International Journal of Computer Techniques – Volume 12, Issue 2. ISSN 2394-2231.
