AIR-RC-022

Regulatory Compliance and Oversight


Summary

AI systems in financial services must comply with the same regulatory standards as human-driven processes, including those related to suitability, fairness, record-keeping, and marketing conduct. Failure to supervise or govern AI tools properly can lead to non-compliance, particularly in areas like financial advice, credit decisions, or trading. As regulations evolve—such as the upcoming EU AI Act—firms face increasing obligations to ensure AI transparency, accountability, and risk management, with non-compliance carrying potential fines or legal consequences.

Description

The financial services sector is subject to extensive regulatory oversight, and the use of artificial intelligence does not exempt firms from these obligations. Regulators across jurisdictions have made it clear that AI-generated content and decisions must comply with the same standards as those made by human professionals. Whether AI is used for advice, marketing, decision-making, or communication, firms remain fully accountable for ensuring regulatory compliance.

Key regulatory obligations apply directly to AI-generated outputs:

  • Financial Advice: Subject to KYC, suitability assessments, and accuracy requirements (MiFID II, SEC regulations)
  • Marketing Communications: Must be fair, clear, accurate, and not misleading per consumer protection laws
  • Record-Keeping: AI interactions, recommendations, and outputs must be retained per MiFID II, SEC Rule 17a-4, and FINRA guidelines
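The record-keeping obligation above can be illustrated with a minimal sketch: an append-only, hash-chained-free audit log of AI interactions. The `log_ai_interaction` helper, the field names, and the JSONL store are illustrative assumptions, not a prescribed format; the point is that each record is timestamped and carries a content hash so later tampering is detectable, in the spirit of non-rewriteable retention under SEC Rule 17a-4.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical append-only store

def log_ai_interaction(user_id: str, prompt: str, response: str, model: str) -> dict:
    """Record an AI interaction as a timestamped audit entry.

    Illustrative only: real WORM retention requires storage-level
    controls, but a content hash makes each entry self-verifying.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

In practice the same pattern applies whether the sink is a file, a database, or a dedicated archiving service: capture the full prompt and output, never mutate past entries, and retain for the period the applicable regime requires.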

Beyond the application of existing rules, financial regulators have published AI-specific expectations for governance, risk management, and validation. These regimes have diverged materially across jurisdictions, and practitioners need to map their systems to the correct framework:

  • Model Risk Management (UK and EU): In the UK, the PRA’s SS1/23 treats in-scope AI systems — including generative and agentic AI — as falling within model risk management, with the four pillars of development, validation, governance, and ongoing monitoring applying. The EBA’s guidelines on the use of ML for AML/CFT remain in force in the EU. AI models informing critical decisions such as credit underwriting, capital adequacy calculations, algorithmic trading, fraud detection, and AML/CFT monitoring are subject to rigorous model governance, requiring comprehensive validation, ongoing performance monitoring, clear documentation, and effective human oversight.
  • Model Risk Management (US): As of 17 April 2026, the OCC, Federal Reserve and FDIC jointly revised their interagency MRM guidance (SR 11-7 / OCC Bulletin 2011-12, reissued as OCC Bulletin 2026-13) to explicitly exclude generative and agentic AI from scope. The same package rescinded OCC Bulletin 1997-24 (credit scoring) and the 2021 interagency statement on MRM for BSA/AML, and clarified that the guidance is “most relevant” to banks above approximately $30bn in assets. SR 11-7 continues to apply to traditional quantitative models (VaR, IRB PD, logistic regression credit scoring, and similar). It no longer applies to GenAI or agentic systems, pending a forthcoming Request for Information on what should replace MRM coverage in this area.
  • Supervision and Accountability: Firms bear the responsibility for adequately supervising their AI systems regardless of which regulatory peg is in scope. A failure to implement effective oversight mechanisms, define clear lines of accountability for AI-driven decisions, and ensure that staff understand the capabilities and limitations of these systems can lead directly to non-compliance.
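The ongoing-monitoring pillar described above can be sketched with a common model risk metric, the population stability index (PSI), which compares a model's development-time score distribution against production. The `psi` helper and the 0.2 alert threshold are illustrative conventions from MRM practice, not values any regulator mandates; the example shows how a drift breach can trigger the human-review escalation that frameworks like SS1/23 expect.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference (development)
    score distribution and the current production distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets at a small constant to avoid log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def needs_revalidation(dev_scores, prod_scores, threshold: float = 0.2) -> bool:
    """Flag the model for human review when drift exceeds a governance
    threshold; a PSI above ~0.2 is conventionally read as significant
    population shift warranting revalidation."""
    return psi(np.asarray(dev_scores), np.asarray(prod_scores)) > threshold
```

In a production governance process the boolean flag would feed a documented workflow (ticketing, model inventory update, revalidation sign-off) rather than silently gating anything, keeping accountability with named humans.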

The US carve-out does not create an unregulated zone for GenAI or agentic systems. It removes the MRM peg, but the underlying obligations have shifted to a wider set of authorities: fair-lending law (ECOA/Reg B), the FCRA adverse-action regime, third-party risk management expectations under FFIEC, the SEC’s anti-fraud authority over AI-related disclosures, NYDFS Part 500, and state-level AI legislation such as the Colorado AI Act and California DFPI activity. Firms running parallel UK and US operations should expect transatlantic divergence to widen until the US RFI process concludes.

The regulatory landscape is also evolving. New legislation such as the EU AI Act classifies certain financial AI applications (e.g., credit scoring, fraud detection) as high-risk, which will impose additional obligations related to transparency, fairness, robustness, and human oversight. For high-risk AI systems, Article 27 of the EU AI Act requires deployers (including financial institutions) to conduct Fundamental Rights Impact Assessments before deployment, evaluating potential impacts on individuals’ rights and freedoms. Firms that fail to adequately supervise and document their AI systems risk not only operational failure but also regulatory fines, restrictions, or legal action.

Responsible AI considerations—such as fairness, transparency, accountability, and human oversight—are increasingly codified in regulation rather than remaining solely ethical aspirations. Financial institutions should address these concerns to the extent required by applicable regulations and supervisory expectations in their jurisdictions.

As regulatory expectations grow, firms must ensure that their deployment of AI aligns with existing rules while preparing for future compliance obligations. Proactive governance, auditability, and cross-functional collaboration between compliance, technology, and legal teams are essential.