A MULTI-DIMENSIONAL FRAMEWORK FOR ADDRESSING BIAS AND FAIRNESS IN MACHINE LEARNING FOR HEALTHCARE ANALYTICS | IJCT Volume 13 – Issue 1 | IJCT-V13I1P8

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 1  |  Published: January – February 2026

Authors

Chinonso Job Ogbu, Joy Onyinyeoma Onwe, Festus Chijioke

Abstract

Machine learning (ML) systems increasingly influence clinical decision-making, yet algorithmic bias poses significant risks to patient safety and health equity. In one prominent case, a commercial risk prediction algorithm underestimated illness severity for 70,000 Black patients annually. Current approaches inadequately address the interconnected ethical, social, sustainability, and regulatory dimensions of this problem. This study develops a comprehensive four-dimensional framework integrating ethical principles, social implications, environmental sustainability, and regulatory compliance. Built upon ISO/IEC 42001 and IEEE standards, it incorporates GDPR, HIPAA, and EU AI Act requirements. Three core strategies are proposed: fairness-aware algorithms, transparent auditing, and inclusive development teams. The framework reduces disparate impact by 46% while maintaining clinical accuracy. Implementation guidance addresses fairness-accuracy trade-offs, resource constraints, data limitations, and regulatory complexity. Healthcare organizations can implement foundational recommendations immediately, progressing through intermediate to advanced capabilities, enabling ML deployment that enhances rather than undermines health equity.
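The abstract's headline metric, disparate impact, can be illustrated with a small sketch. The predictions, group labels, and numbers below are hypothetical examples, not data from the paper:

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates: unprivileged group / privileged group.

    A value near 1.0 indicates parity; the common "80% rule" flags
    ratios below 0.8 as potentially discriminatory.
    """
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged / rate_privileged

# Hypothetical binary predictions (1 = flagged for extra care)
# for two demographic groups (0 = unprivileged, 1 = privileged)
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(disparate_impact(y_pred, group))  # 0.4 / 0.8 = 0.5, well below the 0.8 rule of thumb
```

A "46% reduction in disparate impact" in this setting would mean moving the ratio roughly half of the remaining distance toward 1.0, e.g. from 0.5 toward parity, while monitoring that clinical accuracy is preserved.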

Keywords

machine learning, healthcare analytics, algorithmic bias, fairness, health equity, regulatory compliance, sustainability

Conclusion

Machine learning systems in healthcare hold tremendous promise but require proactive, systematic attention to bias and fairness. This paper presents a multi-dimensional framework integrating ethical, social, sustainability, and regulatory considerations with concrete implementation strategies. The framework's core contribution is its demonstration that effective bias mitigation requires coordinated action across technical practices, organizational processes, cultural factors, and regulatory compliance. Healthcare organizations can immediately apply the framework's recommendations, beginning with governance structures and basic auditing, then progressively building advanced capabilities. The phased roadmap provides actionable guidance for organizations at various capability levels. As ML becomes increasingly embedded in healthcare delivery, the decisions made now about fairness and accountability will shape health equity for decades to come.

References

[1] E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books, 2019.
[2] W. Raghupathi and V. Raghupathi, “Big data analytics in healthcare: Promise and potential,” Health Inf. Sci. Syst., vol. 2, no. 3, 2014. DOI: 10.1186/2047-2501-2-3
[3] Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, vol. 366, no. 6464, pp. 447-453, 2019. DOI: 10.1126/science.aax2342
[4] E. Vayena, A. Blasimme, and I. G. Cohen, “Machine learning in medicine: Addressing ethical challenges,” PLOS Med., vol. 15, no. 11, p. e1002689, 2018. DOI: 10.1371/journal.pmed.1002689
[5] M. W. Sjoding, R. P. Dickson, T. J. Iwashyna, S. E. Gay, and T. S. Valley, “Racial bias in pulse oximetry measurement,” N. Engl. J. Med., vol. 383, no. 25, pp. 2477-2478, 2020. DOI: 10.1056/NEJMc2029240
[6] J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct. 2018. [Online]. Available: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/
[7] S. M. West, M. Whittaker, and K. Crawford, Discriminating Systems: Gender, Race, and Power in AI. New York: AI Now Institute, 2019.
[8] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM Comput. Surv., vol. 54, no. 6, pp. 1-35, 2021. DOI: 10.1145/3457607
[9] A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics guidelines,” Nat. Mach. Intell., vol. 1, no. 9, pp. 389-399, 2019. DOI: 10.1038/s42256-019-0088-2
[10] T. H. Davenport and R. Kalakota, The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. Cambridge, MA: MIT Press, 2019.
[11] E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in NLP,” in Proc. 57th Annu. Meet. Assoc. Comput. Linguistics, 2019, pp. 3645-3650. DOI: 10.18653/v1/P19-1355
[12] T. L. Beauchamp and J. F. Childress, Principles of Biomedical Ethics, 8th ed. Oxford: Oxford Univ. Press, 2019.
[13] IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. New York: IEEE, 2019.
[14] A. S. Adamson and A. Smith, “Machine learning and health care disparities in dermatology,” JAMA Dermatol., vol. 154, no. 11, pp. 1247-1249, 2018.
[15] D. Dhar, “Challenges of energy efficiency in machine learning,” in Proc. IEEE Conf. Energy Efficient AI, 2020, pp. 45-52.
[16] European Union, “General Data Protection Regulation (GDPR),” Regulation (EU) 2016/679, 2016.
[17] U.S. Department of Health and Human Services, “Health Insurance Portability and Accountability Act (HIPAA),” Public Law 104-191, 1996.
[18] European Commission, “The Artificial Intelligence Act,” Brussels: European Union, 2024.
[19] ISO/IEC, “ISO/IEC 42001: Artificial Intelligence Management System,” Geneva: ISO/IEC, 2023.
[20] M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi, “Fairness constraints: Mechanisms for fair classification,” in Proc. 20th Int. Conf. Artif. Intell. Statist., 2017, pp. 962-970.
[21] Google Health, “Improving fairness in diabetic retinopathy screening,” Google Res. Blog, 2022. [Online]. Available: https://health.google/research
[22] A. Bohr and K. Memarzadeh, Artificial Intelligence in Healthcare. London: Academic Press, 2020. DOI: 10.1016/B978-0-12-818438-7.00002-2
[23] R. K. E. Bellamy et al., “AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias,” IBM J. Res. Develop., vol. 63, no. 4/5, pp. 4:1-4:15, 2019.
[24] A. Rajkomar, J. Dean, and I. Kohane, “Machine learning in medicine,” N. Engl. J. Med., vol. 380, no. 14, pp. 1347-1358, 2019. DOI: 10.1056/NEJMra1814259
[25] Microsoft, Sustainability Report 2024. [Online]. Available: https://www.microsoft.com/sustainability
[26] M. Hardt, E. Price, and N. Srebro, “Equality of opportunity in supervised learning,” in Proc. 30th Int. Conf. Neural Inf. Process. Syst., 2016, pp. 3315-3323.
[27] A. Chen, D. W. Bates, and N. M. Fazal, “Addressing data scarcity in medical AI,” Nat. Med., vol. 27, pp. 1485-1487, 2021. DOI: 10.1038/s41591-021-01464-2
[28] D. A. Vyas, L. G. Eisenstein, and D. S. Jones, “Hidden in plain sight—Reconsidering the use of race correction in clinical algorithms,” N. Engl. J. Med., vol. 383, no. 9, pp. 874-882, 2020. DOI: 10.1056/NEJMms2004740
[29] M. Mitchell et al., “Model cards for model reporting,” in Proc. Conf. Fairness, Accountability, Transparency, 2019, pp. 220-229. DOI: 10.1145/3287560.3287596
[30] I. Y. Chen, P. Szolovits, and M. Ghassemi, “Can AI help reduce disparities in general medical and mental health care?” AMA J. Ethics, vol. 21, no. 2, pp. E167-179, 2019. DOI: 10.1001/amajethics.2019.167

How to Cite This Paper

Chinonso Job Ogbu, Joy Onyinyeoma Onwe, Festus Chijioke (2026). A Multi-Dimensional Framework for Addressing Bias and Fairness in Machine Learning for Healthcare Analytics. International Journal of Computer Techniques, 13(1). ISSN: 2394-2231.

© 2025 International Journal of Computer Techniques (IJCT). All rights reserved.
