Multimodal Automated Machine Learning Framework for Gynecological Cancer Prediction and Clinical Decision Support | IJCT Volume 13 – Issue 2 | IJCT-V13I2P67

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Authors

Sanjay S, Rishi Bala P

Abstract

Gynecological malignancies, particularly cervical and uterine endometrial cancers, collectively account for hundreds of thousands of cancer-related deaths annually worldwide, with late-stage diagnosis remaining a persistent barrier to effective clinical intervention. Existing computer-aided detection systems address these cancers through isolated, single-modality approaches that fail to capture the multidimensional nature of gynecological cancer risk, which spans clinical, molecular, and imaging domains simultaneously. This paper presents GynoVision AI, a multimodal clinical decision support platform integrating four independent machine learning and deep learning prediction modules within a unified, explainable web-based system targeting both cervical and uterine cancer risk stratification. The system employs four modules: a calibrated LightGBM classifier for cervical cancer clinical risk prediction from twenty-seven behavioral and demographic risk factors; a fine-tuned ResNet-50 convolutional neural network for five-class Pap smear cytology cell-type classification; a class-weighted Logistic Regression model for uterine cancer clinical risk assessment from eighteen clinico-pathological features; and a novel dual-task architecture combining a Random Forest molecular subtype classifier with an XGBoost survival outcome predictor trained on The Cancer Genome Atlas (TCGA) uterine corpus endometrial carcinoma dataset. A multi-method explainability framework applies SHAP TreeExplainer across all structured-data models and Grad-CAM for cytology image classification, ensuring every prediction is transparent and clinically interpretable. A rule-based clinical decision support engine maps predicted risk tiers to evidence-based recommendations, including screening schedules, referral triggers, and biopsy guidelines. The system is deployed as a microservices architecture with four independent Flask REST API backends and a React TypeScript frontend.
Experimental evaluation demonstrates strong predictive performance across all modules, with the cytology model achieving 91% accuracy and AUC-ROC of 0.97, and structured data models yielding AUC-ROC scores ranging from 0.83 to 0.93, collectively establishing GynoVision AI as a technically rigorous and clinically oriented multimodal decision support prototype for gynecological oncology research.

Keywords

Cervical Cancer, Uterine Cancer, Clinical Decision Support, LightGBM, ResNet-50, SHAP, Grad-CAM, Multimodal AI, Explainable Artificial Intelligence, Gynecological Oncology, TCGA, Molecular Subtyping, Microservices.

Conclusion

This paper has presented GynoVision AI, a multimodal explainable clinical decision support platform integrating four independent machine learning and deep learning prediction modules spanning clinical, imaging, and molecular data modalities for cervical and uterine endometrial cancers within a unified, deployable web-based system. Within a single integrated clinical platform, a calibrated LightGBM classifier achieves an AUC-ROC of 0.93 for cervical cancer clinical risk prediction; a fine-tuned ResNet-50 convolutional neural network achieves 91% accuracy and an AUC-ROC of 0.97 for five-class Pap smear cytology classification; a class-weighted Logistic Regression model achieves an AUC-ROC of 0.91 for uterine cancer clinical risk stratification; and a novel dual-task Random Forest and XGBoost architecture simultaneously addresses molecular subtype classification and survival outcome prediction from a shared genomic input vector. The multi-method explainability framework, combining SHAP TreeExplainer, coefficient-based attribution, and Grad-CAM visual explanation across all four prediction modules, represents a meaningful contribution to trustworthy clinical AI: it demonstrates that comprehensive, modality-appropriate explainability can be consistently delivered across diverse machine learning architectures within a unified system. The novel MSI_PC1 composite microsatellite instability feature, the robust SHAP fallback mechanism, the externally configurable risk stratification thresholds, and the evidence-based clinical decision support engine collectively advance the state of the art in multimodal gynecological cancer decision support beyond existing unimodal and non-explainable approaches.
Clinical validation through prospective studies, extension to whole-slide cytology image analysis, multi-omics feature integration, and federated learning for privacy-preserving model training across distributed healthcare institutions represent the principal directions for future development toward production-grade clinical deployment of GynoVision AI.

References

[1] K. Fernandes, J. S. Cardoso, and J. Fernandes, “Transfer Learning with Partial Observability Applied to Cervical Cancer Screening,” in Proc. Iberian Conf. Pattern Recognition and Image Analysis (IbPRIA), Faro, Portugal, 2017, pp. 243–250.
[2] M. E. Plissiti, P. Dimitrakopoulos, G. Sfikas, C. Nikou, O. Krikoni, and A. Charchanti, “SIPaKMeD: A New Dataset for Feature and Image Based Classification of Normal and Pathological Cervical Cells in Pap Smear Images,” in Proc. 25th IEEE Int. Conf. Image Processing (ICIP), Athens, Greece, 2018, pp. 3144–3148.
[3] Cancer Genome Atlas Research Network, “Integrated Genomic Characterization of Endometrial Carcinoma,” Nature, vol. 497, no. 7447, pp. 67–73, May 2013.
[4] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778.
[5] S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 30, Long Beach, CA, USA, 2017, pp. 4765–4774.
[6] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” in Proc. IEEE Int. Conf. Computer Vision (ICCV), Venice, Italy, 2017, pp. 618–626.
[7] T. Chen and C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” in Proc. 22nd ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, San Francisco, CA, USA, 2016, pp. 785–794.
[8] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, “LightGBM: A Highly Efficient Gradient Boosting Decision Tree,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 30, Long Beach, CA, USA, 2017, pp. 3146–3154.
[9] L. Breiman, “Random Forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, Oct. 2001.
[10] B. Nithya and V. Ilakkiya, “Review on Machine Learning Methods for Cervical Cancer Detection and Classification,” Int. J. Scientific and Technology Research, vol. 8, no. 9, pp. 2570–2575, Sep. 2019.
[11] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global Cancer Statistics 2018: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394–424, Nov. 2018.
[12] E. Hussain, L. B. Mahanta, H. Borah, and C. R. Das, “Liquid Based-Cytology Pap Smear Dataset for Automated Multi-Class Diagnosis of Pre-Cancerous and Cervical Cancer Lesions,” Data in Brief, vol. 30, pp. 1–8, Jun. 2020.
[13] P. Kaur, G. Sharma, and M. Mittal, “Explainable Artificial Intelligence and its Impact on Healthcare Decision Making,” Procedia Computer Science, vol. 204, pp. 749–757, 2022.
[14] S. Akter, M. S. Islam, M. M. Rahman, and M. A. Hossain, “Prediction of Cervical Cancer from Behavior Risk Using Machine Learning Techniques,” in Proc. Int. Conf. Innovative Computing and Communications (ICICC), New Delhi, India, 2021, pp. 1–12.
[15] P. Mobadersany, S. Yousefi, M. Amgad, D. A. Gutman, J. S. Barnholtz-Sloan, J. E. Velázquez Vega, D. J. Brat, and L. A. D. Cooper, “Predicting Cancer Outcomes from Histology and Genomics Using Convolutional Networks,” Proc. National Academy of Sciences (PNAS), vol. 115, no. 13, pp. E2970–E2979, Mar. 2018.

How to Cite This Paper

Sanjay S, Rishi Bala P (2026). Multimodal Automated Machine Learning Framework for Gynecological Cancer Prediction and Clinical Decision Support. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
