ABT Blog

Treasury's 230-Control AI Risk Framework: What Financial Institutions Need to Know

Written by Justin Kirsch | Wed, Mar 04, 2026

The U.S. Treasury Department released its Financial Services AI Risk Management Framework (FS AI RMF) on February 19, 2026, giving banks, credit unions, and mortgage companies their first sector-specific operational playbook for AI risk. The framework maps 230 control objectives across seven risk domains and four adoption stages. It is voluntary but built to become the de facto standard that examiners, auditors, and regulators reference when evaluating how your institution manages AI. Here is what it covers, what to prioritize, and how to start implementing it.

What the Treasury AI Risk Framework Is (and Isn't)

The FS AI RMF was developed by the Artificial Intelligence Executive Oversight Group (AIEOG), a public-private partnership between Treasury's Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council (FSSCC), with execution by the Cyber Risk Institute (CRI). More than 100 financial institutions contributed to its development.

The framework is voluntary. Treasury has no examination authority over banks or credit unions. But if the FFIEC, OCC, FDIC, or NCUA incorporates these controls into examination procedures (as they did with the NIST Cybersecurity Framework), "voluntary" becomes "expected." The framework was released as part of President Trump's AI Action Plan (July 2025), which directed agencies to develop sector-specific AI risk management tools.

It consists of four components:

  • AI Adoption Stage Questionnaire -- A maturity-based self-assessment that places your institution into one of four adoption stages (from exploratory to enterprise-scale)
  • Risk and Control Matrix (RCM) -- The core engine: 230 control objectives that translate NIST AI RMF principles into operational, auditable controls
  • User Guidebook -- Implementation guidance on how controls should be operationalized across governance, technology, and process layers
  • Control Objective Reference Guide -- A 400+ page document with evidence examples for audit and supervisory review

What it is not: a replacement for existing regulatory guidance. It builds on SR 11-7 (model risk management), the FFIEC IT Examination Handbook, third-party risk management guidance (OCC 2023-17), and the NIST AI RMF (AI 100-1). Think of it as the financial services translation layer sitting between generic NIST principles and the specific operational realities of running AI in a regulated institution.

230 control objectives mapped across 7 risk domains and 4 adoption stages in the Treasury FS AI RMF.
Source: U.S. Department of the Treasury, February 2026

The Seven Risk Domains Explained

The 230 controls are organized across seven risk domains. Each domain addresses a distinct category of AI risk that financial institutions face. Here is what each covers and why it matters.

1. Governance and Accountability

Controls in this domain establish board-level oversight, AI governance committees, risk appetite statements for AI, and clear lines of accountability. For banks already running risk committees, this extends existing governance structures to cover AI-specific decisions. For credit unions without dedicated risk committees, this is where the gap is widest.

2. Data Integrity and Management

Data lineage, quality controls, privacy-by-design principles, and data lifecycle management. The framework emphasizes upstream data governance, requiring institutions to document where AI training data originates, how it was cleaned, and who authorized its use. Machine unlearning capabilities (the ability to remove specific data from trained models) appear here as a forward-looking control.

3. Model Development and Validation

Controls covering model documentation, development standards, testing requirements, and independent validation. This domain directly maps to SR 11-7 model risk management expectations. Institutions already compliant with SR 11-7 will find significant overlap, but the framework adds AI-specific requirements around training data bias testing, explainability documentation, and continuous monitoring thresholds. This shift toward continuous oversight aligns with a broader industry realization: annual compliance audits are no longer sufficient in the AI era.

4. Monitoring and Performance

Ongoing model performance tracking, drift detection, alert thresholds, and escalation procedures. The framework pushes institutions beyond point-in-time validation toward continuous monitoring integrated into MLOps pipelines. For community banks running a single AI-powered fraud detection tool, this might mean quarterly performance reviews. For large banks with hundreds of AI models, it means automated monitoring infrastructure.
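To make the idea of "drift thresholds" concrete, here is a minimal sketch of one widely used drift metric, the Population Stability Index (PSI), with the common 0.10/0.25 alert heuristics. The function names and thresholds are illustrative assumptions, not prescribed by the framework; your model risk team would set thresholds per model.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI), a common score-drift metric:
    compares today's score distribution against the validation baseline."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Decile edges derived from the baseline distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip production scores into the baseline range so every value lands in a bin
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions to avoid log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def drift_status(psi):
    # Widely used heuristics: < 0.10 stable, 0.10-0.25 investigate, > 0.25 escalate
    if psi < 0.10:
        return "stable"
    if psi < 0.25:
        return "investigate"
    return "escalate"
```

A community bank might run this quarterly against its fraud model's scores; a larger institution would wire the same check into an automated monitoring pipeline with escalation procedures attached to the "escalate" state.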

5. Third-Party and Vendor Risk Management

AI vendor due diligence, contractual transparency requirements, audit rights, API security, and concentration risk. This domain matters because most financial institutions are not building AI internally. They are buying it from vendors like nCino, Finastra, Jack Henry, and dozens of fintech providers. The controls require institutions to understand how their vendors' AI models work, what data they process, and how they validate performance. It aligns with the OCC/FRB/FDIC Third-Party Risk Management Guidance (2023).

6. Fairness, Bias, and Consumer Protection

Bias testing, disparate impact analysis, explainability for consumer-facing decisions, and adverse action notice requirements. For any institution using AI in lending, underwriting, pricing, or customer segmentation, this domain is critical. It connects directly to ECOA, Fair Housing Act, and CFPB fair lending expectations. An AI model that produces unexplainable lending decisions is a fair lending violation waiting to happen.
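As a first-pass illustration of disparate impact analysis, here is a sketch of the four-fifths (80%) rule, a screening heuristic drawn from EEOC guidance. It is one screen among several a fair lending program would run, not a legal determination, and the approval numbers below are purely illustrative.

```python
def adverse_impact_ratio(protected_approvals, protected_applicants,
                         control_approvals, control_applicants):
    """Four-fifths (80%) rule screen: the protected group's approval rate
    divided by the most-favored (control) group's approval rate."""
    protected_rate = protected_approvals / protected_applicants
    control_rate = control_approvals / control_applicants
    return protected_rate / control_rate

# Illustrative numbers only: 30% vs. 50% approval rates
ratio = adverse_impact_ratio(120, 400, 300, 600)
needs_review = ratio < 0.80  # below four-fifths: flag for fair lending review
```

A ratio of 0.60, as in this example, falls well below the 0.80 screen and would trigger deeper statistical analysis before the model touches another credit decision.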

7. Explainability and Transparency

Documentation standards for model decision-making, audit trail requirements, and stakeholder communication. This domain requires institutions to explain, in plain terms, how AI-driven decisions are made. Examiners will want to see documentation that demonstrates your institution understands what its AI models are doing, not just that they produce outputs.

Treasury AI Risk Framework: 7 risk domains, 230 control objectives across governance, data, models, monitoring, third-party risk, fairness, and explainability.

Why This Matters Right Now

The GAO reported in May 2025 that NCUA lacks two critical tools for overseeing credit unions' AI use: detailed model risk management guidance and the authority to examine technology service providers. Treasury's framework fills the guidance gap, but the enforcement gap remains. Credit unions should treat these 230 controls as the benchmark examiners will eventually use, even before NCUA formally adopts them.

Which Controls Matter Most for Banks and Credit Unions

Not all 230 controls carry equal weight. The framework itself is staged by adoption maturity, but within each stage, some controls address higher-risk AI use cases than others. Here is a prioritization framework based on how banking regulators currently assess AI risk.

Critical (Implement Now)

  • AI Inventory and Classification -- You cannot govern what you cannot see. Document every AI tool in use across the institution, including those employees adopted without IT approval. Classify each by risk level (consumer-facing lending decisions = high; internal scheduling tools = low).
  • Governance Committee Charter -- Establish or extend a governance committee with explicit AI oversight authority. Define risk appetite for AI deployment decisions.
  • Third-Party AI Due Diligence -- For every vendor providing AI capabilities, document what models they use, what data they process, and what controls they maintain. Require contractual transparency.
  • Fair Lending Bias Testing -- Any AI touching lending or underwriting decisions needs disparate impact testing before deployment and on a recurring schedule.
  • Data Privacy Controls -- Ensure AI systems handling customer data comply with GLBA, state privacy laws, and your institution's own privacy policies.
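The inventory-and-classification control above can be as simple as a structured register. The sketch below shows one possible shape, assuming a three-tier classification; the tool names, field choices, and classification logic are all hypothetical simplifications, and a real program would follow the risk criteria in the RCM.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g., consumer-facing lending decisions
    MEDIUM = "medium"  # e.g., tools touching customer data
    LOW = "low"        # e.g., internal scheduling tools

@dataclass
class AIInventoryEntry:
    name: str
    vendor: str              # "internal" for tools built in-house
    use_case: str
    handles_customer_data: bool
    affects_credit_decisions: bool
    approved_by_it: bool     # False marks the entry as shadow AI
    risk_tier: RiskTier = field(init=False)

    def __post_init__(self):
        # Illustrative classification: credit impact dominates,
        # then customer data exposure, then everything else
        if self.affects_credit_decisions:
            self.risk_tier = RiskTier.HIGH
        elif self.handles_customer_data:
            self.risk_tier = RiskTier.MEDIUM
        else:
            self.risk_tier = RiskTier.LOW

# Hypothetical entry for illustration
entry = AIInventoryEntry(
    name="LoanScore Assist", vendor="ExampleVendor",
    use_case="underwriting decision support",
    handles_customer_data=True, affects_credit_decisions=True,
    approved_by_it=True,
)
```

Even a register this simple answers the examiner's first questions: what AI is running, who approved it, and which entries sit in the high-risk tier.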

Important (Implement This Year)

  • Model Validation Procedures -- Extend SR 11-7 processes to cover AI-specific risks (training data bias, model drift, explainability gaps).
  • Continuous Monitoring Setup -- Move beyond annual model reviews to ongoing performance tracking with defined drift thresholds.
  • Incident Response for AI Failures -- Define what happens when an AI model produces incorrect outputs. Who gets notified? What is the rollback procedure?
  • Consumer Communication Standards -- Ensure adverse action notices and other required disclosures account for AI-driven decisions.

Foundational (Build Toward)

  • MLOps Pipeline Integration -- Embed controls into CI/CD pipelines rather than bolting them on after deployment.
  • Machine Unlearning Capabilities -- Prepare for data deletion requests that require removing specific records from trained models.
  • Automated Evidence Generation -- Build systems that automatically produce the documentation examiners need, rather than creating it manually before each exam.
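One minimal sketch of what automated evidence generation can mean in practice: emitting a timestamped, hash-stamped record per control objective. The control ID "GOV-01" and field layout here are hypothetical; real identifiers would come from the framework's Risk and Control Matrix.

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(control_id, description, artifact_paths, reviewer):
    """Assemble one timestamped, hash-stamped evidence record for a control
    objective, ready to export into an exam-ready evidence repository."""
    record = {
        "control_id": control_id,
        "description": description,
        "artifacts": sorted(artifact_paths),
        "reviewer": reviewer,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets auditors confirm the record was not altered later
    canonical = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(canonical).hexdigest()
    return record
```

Generating records like this continuously, rather than assembling binders before each exam, is the difference between the "Foundational" and "Critical" tiers above.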

97% of surveyed organizations lacked adequate controls governing internal AI use.
Source: AI Governance Survey, Caspian One, 2025

How This Framework Connects to Existing Regulatory Expectations

The framework does not exist in isolation. Financial institutions already operate under overlapping regulatory requirements that touch AI risk. The FS AI RMF maps to these existing expectations and, in many cases, extends them.

| Existing Requirement | Treasury Framework Connection | Gap Addressed |
|---|---|---|
| SR 11-7 (Model Risk Management) | Domain 3: Model Development and Validation | AI-specific bias testing, continuous monitoring, explainability documentation |
| FFIEC IT Examination Handbook | Domains 1, 4, 5: Governance, Monitoring, Third-Party Risk | AI governance structures, automated monitoring, vendor AI due diligence |
| OCC 2023-17 (Third-Party Risk) | Domain 5: Third-Party and Vendor Risk | AI-specific vendor transparency, model audit rights, API security |
| ECOA / Fair Housing Act | Domain 6: Fairness, Bias, and Consumer Protection | Systematic bias testing frameworks, disparate impact documentation |
| GLBA / Privacy Requirements | Domain 2: Data Integrity and Management | Privacy-by-design for AI, data lineage, machine unlearning |
| BSA/AML (FinCEN) | Domains 3, 4: Model Development, Monitoring | Validation standards for AI-driven transaction monitoring and SAR filing |

Institutions that have invested in SR 11-7 compliance, FFIEC examination readiness, and third-party risk management already have a foundation. The Treasury framework fills the AI-specific gaps in those existing programs. The work is incremental, not greenfield.

"The FS AI RMF provides practical tools and reference materials to help institutions evaluate AI use cases, manage risks across the AI lifecycle, and embed accountability, transparency, and resilience into AI deployment decisions."

U.S. Department of the Treasury, February 2026

The Shadow AI Problem: Why 230 Controls Won't Help If You Can't See Your AI

The entire framework rests on one assumption: that your institution knows what AI it is using. Most do not. Employees are using ChatGPT to draft loan narratives, Copilot to search internal documents, and free AI tools to summarize regulatory filings. None of this appears in your AI inventory because nobody asked permission.

Before you can implement 230 controls, you need to answer a more basic question: How many AI tools are running in your environment right now? If your compliance team cannot answer that with confidence, the framework's control objectives are theoretical.

The AI inventory is Control Objective #1 for a reason. It is the foundation everything else builds on. Without a complete picture of AI usage across your institution, you are governing a fraction of your actual risk.

For a deep dive into how unauthorized AI use creates compliance and security exposure in banking, read our companion article: Shadow AI in Banking: The Risk Your Compliance Team Can't See.

Implementation Roadmap: From Zero Controls to Compliant

The framework is staged by adoption maturity, but most community banks and credit unions need a time-based roadmap. Here is a four-quarter approach that maps to the framework's priorities.

Quarter 1: Foundation

  • Complete the AI Adoption Stage Questionnaire to establish your baseline
  • Conduct a full AI inventory across all departments (including shadow AI)
  • Establish or extend a governance committee with AI oversight authority
  • Identify and document the top 20 critical controls from the RCM that apply to your current AI use
  • Begin third-party AI due diligence for your highest-risk vendors

Quarter 2: High-Risk AI Controls

  • Conduct risk assessments for all AI applications classified as high-risk (lending, BSA/AML, customer-facing)
  • Implement fair lending bias testing for any AI-involved credit decisions
  • Extend SR 11-7 validation procedures to cover AI-specific risks
  • Deploy data governance controls for AI training data and customer PII
  • Establish continuous monitoring baselines for deployed AI models

Quarter 3: Expansion

  • Implement remaining controls from the RCM aligned to your adoption stage
  • Build incident response procedures for AI-specific failures
  • Conduct tabletop exercises for AI failure scenarios
  • Develop consumer communication standards for AI-influenced decisions

Quarter 4: Audit Readiness

  • Compile evidence artifacts using the Control Objective Reference Guide
  • Conduct internal audit of AI governance program against the full 230-control matrix
  • Prepare examination-ready documentation for each risk domain
  • Test automated evidence generation and monitoring alert workflows

Four-quarter implementation roadmap: Foundation, High-Risk Controls, Expansion, and Audit Readiness.

Starting From Scratch

  • No AI inventory exists
  • No governance committee covers AI
  • SR 11-7 does not address AI models
  • No AI-specific vendor due diligence
  • No bias testing in place
  • 12-18 month implementation timeline

Building on Existing Governance

  • Partial AI inventory from IT asset management
  • Existing risk committee can extend charter
  • SR 11-7 program covers some AI models
  • Third-party risk program covers AI vendors
  • Fair lending testing includes some AI models
  • 6-9 month implementation timeline

How the Framework Compares to Global AI Regulation

The Treasury FS AI RMF is the most control-detailed AI risk framework specifically built for financial services. Here is how it compares to the three other major regulatory approaches institutions should track.

EU AI Act

The EU AI Act entered into force in August 2024, with high-risk AI system obligations becoming fully applicable by August 2026. It classifies AI into risk tiers (unacceptable, high, limited, minimal) and imposes extensive requirements on high-risk systems, a category that covers much financial services AI. The European Banking Authority will begin AI Act implementation activities in the banking sector through 2026-2027. The EU AI Act is broader (all industries) but less operationally specific for financial services than Treasury's framework.

Singapore MAS Guidelines

The Monetary Authority of Singapore (MAS) opened consultation on proposed AI risk management guidelines for financial institutions in November 2025. MAS requires institutions to maintain AI inventories, conduct risk materiality assessments, and establish governance structures. Singapore's approach is more principles-based and less prescriptive than Treasury's 230-control matrix, but its FI-specific focus makes it a closer comparison than the EU AI Act.

UK FCA Approach

The UK Financial Conduct Authority has explicitly chosen not to create AI-specific regulation. Instead, it applies existing frameworks (Consumer Duty, Senior Managers Regime) to AI use cases. On November 12, 2025, the FCA and MAS announced a strategic AI partnership for joint testing and regulatory insight sharing. The UK's technology-neutral approach contrasts sharply with Treasury's control-specific framework.

For institutions operating internationally, these four frameworks create overlapping compliance obligations. The Treasury FS AI RMF can serve as the operational foundation, with supplemental controls mapped to EU, Singapore, or UK requirements as needed.

What ABT Recommends for Financial Institutions

We have helped 750+ financial institutions navigate regulatory shifts. From GLBA to SOX to FFIEC cybersecurity expectations to the current AI governance moment, the pattern is the same: frameworks start voluntary, become expected, and eventually become examination criteria. The institutions that implement early have smooth exams. The ones that wait have findings.

Here is where to start:

Take the AI Readiness Scan. Before mapping 230 controls, understand where your institution stands today. Our AI Readiness Scan evaluates your current governance posture, identifies gaps against emerging frameworks like the FS AI RMF, and produces a prioritized action plan.

Leverage your Microsoft 365 environment. Most financial institutions already run on Microsoft 365. Purview can provide AI audit trails. Entra ID and Conditional Access enforce identity controls around AI access. Intune manages the endpoints where AI tools run. The compliance infrastructure is already in your tenant. It just needs to be configured for AI governance.

Do not try to implement all 230 controls at once. The framework is staged for a reason. Start with the controls that map to your current AI use cases and your current adoption stage. A community bank running one AI-powered fraud detection tool does not need the same control set as a regional bank with 50 AI models in production.

ABT is the largest Tier-1 Microsoft CSP primarily dedicated to financial services. Our Guardian operating model wraps governance, monitoring, and compliance around your Microsoft tenant. As AI governance requirements expand, Guardian extends to cover AI-specific controls within the same managed environment. One relationship covering licensing, security, compliance, and now AI governance.

How Ready Is Your Institution for 230 AI Controls?

The Treasury FS AI RMF sets a new baseline for AI governance in financial services. Our AI Readiness Scan maps your current posture against the framework's control objectives and produces a prioritized implementation plan.

Start Your AI Readiness Scan

Frequently Asked Questions

What is the Treasury FS AI RMF?

The Treasury FS AI RMF is a voluntary, sector-specific framework released in February 2026. Developed by over 100 financial institutions in partnership with Treasury and the Cyber Risk Institute, it contains 230 control objectives across seven risk domains. It translates the NIST AI Risk Management Framework into operational controls tailored for banks, credit unions, and financial services firms.

How many controls does the framework contain?

The framework contains 230 control objectives organized in a Risk and Control Matrix. These controls span seven risk domains: governance, data integrity, model development, monitoring, third-party risk, fairness and consumer protection, and explainability. Controls are categorized by AI adoption stage so institutions can scale implementation based on their maturity level.

Is the framework mandatory?

The framework is currently voluntary. Treasury has no direct examination authority over financial institutions. However, the NIST Cybersecurity Framework followed the same pattern, starting voluntary then becoming an examination benchmark. Banking regulators such as the OCC, FDIC, and NCUA are expected to reference these controls in future examination procedures, making early adoption prudent.

How does it relate to the NIST AI RMF?

The FS AI RMF translates the NIST AI Risk Management Framework (AI 100-1) into financial services-specific operational controls. While NIST provides generic, cross-industry AI risk principles, Treasury's framework adds 230 mapped control objectives tailored to banking regulations, consumer protection requirements, and financial services examination expectations that NIST does not address directly.

Which controls should institutions implement first?

Start with five critical controls: a complete AI inventory and classification, a governance committee charter with AI authority, third-party AI vendor due diligence, fair lending bias testing for any AI in credit decisions, and data privacy controls for AI systems handling customer information. These address the highest-risk AI use cases examiners will evaluate first.

Does the framework apply to credit unions?

Yes. The framework was designed for all financial services institutions, including credit unions. Credit unions face a particular urgency because a May 2025 GAO report found that NCUA lacks both detailed AI model risk management guidance and the authority to examine credit union technology service providers. The Treasury framework fills the guidance gap that NCUA has not yet addressed.

How does it compare to the EU AI Act?

The EU AI Act is a cross-industry law with mandatory compliance requirements and risk-tier classifications, fully applicable by August 2026. Treasury's framework is voluntary and specific to financial services. The EU approach is broader but less operationally detailed for banking. Treasury's 230 controls provide more granular implementation guidance for financial institutions than the EU AI Act's high-risk system requirements.

Justin Kirsch

CEO, Access Business Technologies

Justin Kirsch has helped financial institutions navigate new regulatory frameworks for over 25 years, from SOX and GLBA to FFIEC cybersecurity and now AI governance. As CEO of Access Business Technologies, he translates frameworks like Treasury's 230-control AI RMF into actionable implementation plans for banks and credit unions.