Responsible AI: Explainable Artificial Intelligence for Heart Disease Prediction using LIME | IJCT Volume 13 – Issue 2 | IJCT-V13I2P37

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Author

Devansh Agarwal

Abstract

Artificial intelligence has increasingly been adopted in healthcare for predictive diagnosis and clinical decision support. However, many high-performing machine learning models operate as opaque “black-box” systems, limiting their transparency and raising concerns regarding trust, accountability, and responsible deployment in critical domains such as medicine. Responsible AI principles emphasize interpretability, fairness, and transparency so that automated decision-making systems can be understood and validated by human experts. In this study, an explainable machine learning framework is proposed for heart disease prediction using clinical data from the UCI Heart Disease dataset, which consists of 920 patient records and multiple clinical attributes related to cardiovascular health. Three classification algorithms, Logistic Regression, Random Forest, and Extreme Gradient Boosting (XGBoost), were implemented and evaluated using performance metrics including accuracy, ROC-AUC, and cross-validation. Experimental results indicate that the XGBoost model achieved the best predictive performance, with an accuracy of approximately 85.3%. To enhance transparency and align with responsible AI practices, the Local Interpretable Model-Agnostic Explanations (LIME) technique was applied to generate interpretable explanations for individual predictions. Global feature importance and local LIME explanations were analyzed to identify clinically relevant attributes influencing heart disease prediction. The results demonstrate that integrating explainable AI methods with machine learning models improves transparency and supports the development of trustworthy AI-driven healthcare systems capable of assisting clinicians in informed decision-making.
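The evaluation pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: synthetic data stands in for the 920-record UCI dataset, and scikit-learn's GradientBoostingClassifier stands in for XGBoost so the sketch has no dependency beyond scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 920-record UCI Heart Disease dataset:
# 13 numeric features, binary target (disease present / absent).
X, y = make_classification(n_samples=920, n_features=13, n_informative=8,
                           random_state=42)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "Gradient Boosting (XGBoost stand-in)":
        GradientBoostingClassifier(random_state=42),
}

results = {}
for name, model in models.items():
    # 5-fold cross-validated accuracy and ROC-AUC, mirroring the
    # metrics reported in the paper.
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    results[name] = (acc, auc)
    print(f"{name}: accuracy={acc:.3f}, ROC-AUC={auc:.3f}")
```

On the real dataset, XGBoost's `XGBClassifier` would replace the stand-in, and the reported ~85.3% accuracy would come from this same cross-validated comparison.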

Keywords

Explainable Artificial Intelligence, Responsible AI, Heart Disease Prediction, LIME

Conclusion

This study presented an explainable machine learning framework for predicting heart disease using clinical data from the UCI Heart Disease dataset. The proposed approach integrates multiple machine learning algorithms with explainable artificial intelligence techniques to provide both accurate predictions and interpretable insights into the model’s decision-making process. Three classification algorithms, Logistic Regression, Random Forest, and Extreme Gradient Boosting (XGBoost), were implemented and evaluated to determine the most effective predictive model. The experimental results demonstrated that the XGBoost model achieved the highest predictive accuracy of approximately 85.3%, outperforming the other models in overall classification performance. Additional evaluation using ROC-AUC and cross-validation confirmed the robustness and reliability of the predictive model.

Beyond predictive accuracy, the study focused on improving transparency and interpretability through the application of explainable AI techniques. Global feature importance analysis identified key clinical attributes influencing heart disease prediction, including chest pain type, exercise-induced angina, and ST-segment related indicators. These features correspond to medically relevant indicators of cardiovascular conditions, demonstrating that the machine learning model captures clinically meaningful relationships within the dataset.

To further enhance interpretability, the Local Interpretable Model-Agnostic Explanations (LIME) technique was used to generate patient-level explanations for individual predictions. LIME provided clear insights into how specific features influenced the model’s decision for a particular patient instance. The comparison between global feature importance and local LIME explanations revealed a strong alignment between overall model behavior and individual prediction reasoning, thereby increasing confidence in the reliability of the predictive system.
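The LIME step described above can be illustrated with a from-scratch sketch of its core mechanics (in practice the `lime` package's `LimeTabularExplainer` would be used; this toy version uses an illustrative black-box model on synthetic data): perturb the instance, query the black-box model, weight perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Illustrative black-box model trained on synthetic "clinical" data.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_explain(instance, predict_proba, n_samples=2000):
    """Fit a locally weighted linear surrogate around one instance."""
    # 1. Perturb the instance with feature-scaled Gaussian noise.
    perturbed = instance + rng.normal(scale=X.std(axis=0),
                                      size=(n_samples, instance.size))
    # 2. Query the black-box model on the perturbations.
    probs = predict_proba(perturbed)[:, 1]
    # 3. Weight perturbations by an exponential kernel on distance
    #    (kernel width follows LIME's default: 0.75 * sqrt(n_features)).
    dists = np.linalg.norm(perturbed - instance, axis=1)
    kernel_width = 0.75 * np.sqrt(instance.size)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear model; its coefficients are the
    #    per-feature local explanation for this one prediction.
    surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_

coefs = lime_explain(X[0], black_box.predict_proba)
for i in np.argsort(-np.abs(coefs))[:3]:
    print(f"feature {i}: local weight {coefs[i]:+.3f}")
```

For a heart disease patient record, the same procedure would surface which clinical attributes (e.g. chest pain type or ST-segment indicators) pushed that individual prediction toward or away from a positive diagnosis.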
The findings of this study demonstrate that integrating machine learning models with explainable AI techniques can significantly improve the transparency and trustworthiness of AI-based healthcare systems. Such systems have the potential to assist clinicians in diagnosing cardiovascular diseases and supporting data-driven clinical decision-making.

References

Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794. https://doi.org/10.1145/2939672.2939785

Dua, D., & Graff, C. (2019). UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences. https://archive.ics.uci.edu

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778

Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.

Agarwal, D., & Logeswari, P. (2025). Explainable AI in cancer diagnosis: Enhancing interpretability with SHAP on benign and malignant tumor detection. International Journal for Research in Applied Science and Engineering Technology. https://doi.org/10.22214/ijraset.2025.66580

Esteva, A., Robicquet, A., Ramsundar, B., et al. (2019). A guide to deep learning in healthcare. Nature Medicine, 25, 24–29. https://doi.org/10.1038/s41591-018-0316-z

How to Cite This Paper

Devansh Agarwal (2026). Responsible AI: Explainable Artificial Intelligence for Heart Disease Prediction using LIME. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
