Responsible AI Governance in Banking: Integrating Model Risk, Consumer Protection, and Operational Resilience Across the United States, European Union, United Kingdom, Singapore, and Hong Kong (Paper ID: IJCT-V13I2P46)

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Author

Rajeew Vishvakarma

Abstract

Banks are embedding artificial intelligence (AI) into credit underwriting, fraud monitoring, trading, collections, customer service, and anti-financial-crime operations. The resulting risk profile is no longer adequately captured by a single governance lens. A single banking AI system can simultaneously be a model-risk object, a consumer-impact mechanism, and a technology dependency whose failure may disrupt critical services. This paper develops an integrated Responsible AI governance framework for banks through a comparative documentary analysis of primary regulatory and supervisory materials across five influential jurisdictions: the United States, the European Union, the United Kingdom, Singapore, and Hong Kong. The analysis shows a convergence of supervisory expectations around three control families: lifecycle model governance, customer protection and explainability, and operational resilience with third-party accountability. Building on this convergence, the paper proposes an integrated control architecture covering governance bodies, AI inventory and use-case classification, risk tiering, validation, explainability, fairness review, deployment controls, incident response, vendor oversight, and ongoing monitoring. Illustrative case studies show that fragmented governance often misses failure modes even where no single regulation has yet been formally breached. The paper concludes with a phased implementation roadmap and argues that banks should govern AI as a cross-cutting enterprise capability rather than as a narrow model-development topic.
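The control architecture summarized above pairs an AI inventory with use-case risk tiering. As a minimal illustrative sketch only (the attribute names, scoring rule, and tier labels below are hypothetical and not taken from the paper), a bank might record a few inventory attributes per use case and map them to a governance tier:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Attributes a bank might record in its AI inventory (illustrative only)."""
    name: str
    customer_facing: bool            # directly affects customer outcomes (e.g. credit decisions)
    autonomous: bool                 # acts without a human in the loop
    supports_critical_service: bool  # failure could disrupt a critical business service

def risk_tier(uc: AIUseCase) -> str:
    """Map inventory attributes to a governance tier under a hypothetical policy."""
    score = sum([uc.customer_facing, uc.autonomous, uc.supports_critical_service])
    if score >= 2:
        return "high"    # e.g. full validation, fairness review, resilience testing
    if score == 1:
        return "medium"  # e.g. standard validation and monitoring
    return "low"         # e.g. lightweight periodic review

# An automated credit-underwriting model touches all three risk dimensions,
# while an internal document summarizer touches none.
underwriting = AIUseCase("credit underwriting", True, True, True)
summarizer = AIUseCase("internal document summarizer", False, False, False)
```

In practice the tiering criteria would be set by the bank's governance bodies and aligned to the applicable regime (e.g. high-risk classification under the EU AI Act); the point of the sketch is only that tiering can be made explicit and auditable rather than ad hoc.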

Keywords

Responsible AI; banking; model risk management; consumer protection; operational resilience; AI governance; AI Act; DORA; SR 11-7; FEAT; HKMA

Conclusion

AI governance in banking is no longer a niche question for data-science teams. It is an enterprise governance challenge with direct implications for safety and soundness, customer treatment, and continuity of critical services. The comparative analysis in this paper shows that regulators across the United States, the European Union, the United Kingdom, Singapore, and Hong Kong are approaching the problem through different legal pathways but with increasingly similar expectations. Banks are expected to know their AI use cases, classify risk, document and validate systems, maintain meaningful oversight, protect customers, manage third-party dependencies, and respond effectively when technology fails or behaves unexpectedly.

The central argument of this paper is that these expectations are best met through an integrated Responsible AI governance framework. Such a framework should connect model risk management, consumer-protection controls, and operational resilience rather than leaving them in parallel silos. Doing so does not merely improve compliance posture. It creates better management information, clearer accountability, more resilient deployment, and greater institutional capacity to innovate responsibly. The institutions most likely to succeed will be those that govern AI as a cross-functional capability with board-level visibility, lifecycle discipline, customer-impact awareness, and operational resilience built in from the start.

References

● Board of Governors of the Federal Reserve System. (2011). SR 11-7: Guidance on Model Risk Management.
● Consumer Financial Protection Bureau. (2022). Consumer Financial Protection Circular 2022-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms.
● Consumer Financial Protection Bureau. (2023a). Chatbots in consumer finance.
● Department of Financial Services, State of New York. (2021). DFS issues findings on the Apple Card and its underwriter Goldman Sachs Bank.
● European Banking Authority. (2023). Follow-up report on machine learning for IRB models.
● European Union. (2022). Regulation (EU) 2022/2554 on digital operational resilience for the financial sector (DORA).
● European Union. (2024a). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
● Financial Conduct Authority. (2021). PS21/3: Building operational resilience.
● Financial Conduct Authority. (2022). PS22/9: A new Consumer Duty.
● Financial Stability Board. (2024). The financial stability implications of artificial intelligence.
● Hong Kong Monetary Authority. (2019). Report on artificial intelligence application in banking.
● Hong Kong Monetary Authority. (2024). Generative artificial intelligence in the financial services space and sandbox arrangements.
● Monetary Authority of Singapore. (2018). Principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI and data analytics.
● Monetary Authority of Singapore. (2023a). MAS-led industry consortium releases toolkit for responsible use of AI in the financial sector.
● Monetary Authority of Singapore. (2023b). MAS partners industry to develop generative AI risk framework for the financial sector.
● Monetary Authority of Singapore. (2024). Artificial intelligence model risk management.
● National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
● Prudential Regulation Authority. (2021/2024). SS2/21: Outsourcing and third party risk management.

How to Cite This Paper

Rajeew Vishvakarma (2026). Responsible AI Governance in Banking: Integrating Model Risk, Consumer Protection, and Operational Resilience Across the United States, European Union, United Kingdom, Singapore, and Hong Kong. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
