In This Article
- What Happened: FHFA Terminates Anthropic AI Contract
- Why Financial Institutions Should Pay Attention
- The AI Governance Gap Across Financial Services
- The AI Vendor Risk You Are Already Carrying
- Third-Party AI Risk Assessment: A Practical Framework
- Contract Provisions Every Financial Institution Needs for AI Vendors
- Building AI Vendor Resilience
- Frequently Asked Questions
When the Federal Housing Finance Agency terminated all use of Anthropic AI products on March 2, 2026, the ripple effect reached far beyond the mortgage GSEs. Banks running AI-powered fraud detection through third-party platforms. Credit unions using vendor-embedded chatbots for member service. Mortgage companies relying on AI document classification inside their loan origination systems (LOS). Every financial institution that depends on vendor AI -- which is nearly all of them -- suddenly had a case study in how fast an AI vendor relationship can unravel for reasons that have nothing to do with technology performance.
FHFA Director William Pulte confirmed that Fannie Mae and Freddie Mac would cease using Anthropic's Claude platform immediately. The action followed President Trump's directive ordering federal agencies to cut ties with Anthropic after the company refused to remove safety restrictions on its AI for Pentagon use. For CISOs and CIOs at banks, credit unions, and mortgage companies, this is a wake-up call about a risk category that most vendor management programs do not adequately address: upstream AI vendor disruption driven by political, regulatory, or geopolitical forces.
What Happened: FHFA Terminates Anthropic AI Contract
The termination started with a dispute between Anthropic and the Pentagon. The Department of Defense wanted to use Anthropic's Claude AI for all lawful purposes, including defense and intelligence operations. Anthropic drew two lines: Claude would not be used for autonomous weapons systems and would not be used for mass surveillance of American citizens. CEO Dario Amodei stated publicly that threats would not change the company's position.
Defense Secretary Pete Hegseth responded by designating Anthropic as a supply-chain risk to national security. President Trump then ordered all federal agencies to terminate Anthropic contracts. Within days, Treasury, FHFA, HHS, and the State Department began shedding their Anthropic relationships. Multiple agencies announced they would transition to OpenAI as an alternative.
FHFA's action carries unique weight for the financial services industry. Unlike Treasury or HHS, FHFA directly regulates Fannie Mae and Freddie Mac -- the two entities that define secondary market access for mortgage lenders and shape risk expectations that OCC, FDIC, and NCUA examiners watch closely. When Director Pulte extended the termination to include both GSEs, the signal reached every corner of financial services: a federal regulator with direct authority over financial institutions considers this AI vendor unacceptable.
The government has a six-month runway to complete the phase-out. Anthropic has stated it will challenge the supply-chain risk designation in court.
This is happening at the same time Freddie Mac Bulletin 2025-16 mandates AI governance for all seller/servicers (effective March 3, 2026), the OCC is increasing scrutiny of bank AI deployments, and NCUA examiners are asking credit unions about their technology service provider oversight. The same regulatory ecosystem that now requires you to govern your AI is simultaneously demonstrating that AI vendor relationships can be disrupted overnight by forces entirely outside your control. If your compliance, operations, or technology stack depends on a single AI vendor, the FHFA-Anthropic episode is a warning you cannot ignore. See our complete breakdown: Freddie Mac AI Mandate Compliance Checklist.
AI Readiness Starts with Your M365 Tenant
The gap between AI ambition and AI governance is where breaches happen. ABT's scan evaluates your readiness across all four phases of the AI journey.
Why Financial Institutions Should Pay Attention
The FHFA-Anthropic termination exposes a blind spot in how banks, credit unions, and mortgage companies manage vendor risk. Traditional third-party risk management focuses on operational stability, data security, and financial viability. Will the vendor stay in business? Will they protect customer data? Can they meet uptime requirements? Those questions still matter. But the Anthropic episode introduces a category that most vendor risk frameworks do not address: political and geopolitical disruption risk.
For banks, the OCC and FDIC already expect robust third-party risk management programs under OCC Bulletin 2023-17 and FDIC FIL-2023-29. But those programs were designed for traditional technology vendors, not for AI providers whose regulatory standing can change overnight based on national security policy. For credit unions, the challenge is compounded by limited staff -- most credit unions manage vendor oversight with two or fewer dedicated employees. For mortgage companies, the GSE connection is direct: when FHFA signals concern about an AI vendor, that concern flows through Fannie Mae and Freddie Mac selling and servicing requirements straight into your compliance obligations.
Consider the chain of events. Anthropic refused to relax AI safety restrictions. The Pentagon labeled them a supply-chain risk. The President ordered agencies to cut ties. FHFA extended that to the GSEs. If your core banking platform, credit union service organization, document AI provider, or LOS vendor uses Anthropic's models under the hood, you now have a vendor-within-a-vendor risk that sits squarely within your AI risk management framework.
The AI Governance Gap Across Financial Services
The disconnect between AI adoption and AI governance in financial services is stark. Seven out of ten financial services firms are formally using AI, but fewer than one in four have policies that specifically address third-party AI risk. That gap is where incidents like the FHFA-Anthropic termination cause the most damage -- institutions that have embedded vendor AI into critical workflows without the governance framework to manage a sudden disruption.
This is not a small-institution problem. Only 12% of Chief Risk Officers across financial services describe their organization's AI governance as "highly developed," according to the ProSight and Oliver Wyman 2026 CRO Outlook Survey. The rest are operating with partial frameworks, informal guidelines, or nothing at all. When your AI vendor's regulatory status changes overnight, the difference between "highly developed" governance and "we are working on it" governance is the difference between an orderly transition and operational chaos.
The governance gap shows up differently across institution types:
- Banks: OCC examiners are increasingly asking about AI vendor oversight during examinations, but many community and regional banks have not updated their vendor management programs to address AI-specific risks like model provenance, upstream dependencies, and concentration risk
- Credit unions: GAO found that NCUA lacks authority to directly examine credit union technology service providers (GAO-25-107197, May 2025), creating a regulatory blind spot where AI vendors operate without direct examiner scrutiny. Credit unions must fill this gap through their own due diligence
- Mortgage companies: Freddie Mac Bulletin 2025-16 now mandates AI governance for seller/servicers, but most companies are still building the vendor assessment component of their programs
The AI Vendor Risk You Are Already Carrying
Most financial institutions use AI through their existing technology vendors without fully mapping where that AI comes from, how it works, or what would happen if it disappeared. The AI is not always labeled as AI. It shows up as "intelligent automation," "smart workflows," or "advanced analytics" inside vendor platforms you have used for years.
Here is where AI is likely embedded in your technology stack right now:
- Core Banking and Lending Platforms: Core processors like FIS, Fiserv, and Jack Henry embed AI for transaction monitoring, risk scoring, and anomaly detection. Loan origination systems including ICE Mortgage Technology's Encompass use AI for document classification, data extraction, and automated condition generation
- Document Processing and Compliance: Vendors like Ocrolus use AI to classify over 1,600 financial document types and pre-populate calculations. BSA/AML platforms use machine learning to flag suspicious activity and reduce false positives
- Fraud Detection: Pattern recognition models that flag suspicious transactions, account takeover attempts, document anomalies, and identity verification concerns across all institution types
- Member and Customer Service: Chatbots, virtual assistants, and AI-powered call routing used by banks and credit unions for digital banking, and by mortgage servicers for borrower communication
- Credit Decisioning: AI-assisted underwriting, credit scoring overlays, and automated pre-qualification engines that supplement traditional models
- Cybersecurity: AI-powered endpoint detection, email security, and threat intelligence platforms -- often running foundation models from the same providers now under regulatory scrutiny
- Regulatory Reporting: Automated HMDA, CRA, and Call Report preparation tools that use AI to validate data, identify errors, and suggest corrections
The question is not whether you use vendor AI. You almost certainly do. The question is whether you know which AI models power each of these functions, who provides them, and what your contingency plan is if one of those vendor relationships changes.
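One practical way to answer that question is to keep a living inventory that records, for each vendor system, which functions use AI, which upstream model provider sits behind them, and whether a documented contingency plan exists. Here is a minimal sketch in Python; the vendor names, providers, and fields are illustrative assumptions, not statements about any vendor's actual architecture.

```python
from dataclasses import dataclass

@dataclass
class VendorAIEntry:
    """One AI-powered function inside a vendor platform you already use."""
    vendor: str              # the vendor you contract with directly
    function: str            # what the AI does for your institution
    upstream_provider: str   # foundation model provider, or "in-house"
    critical: bool           # does a disruption halt a core workflow?
    contingency_plan: bool   # has the vendor documented a fallback?

# Illustrative entries only -- replace with the results of your own vendor inquiries.
inventory = [
    VendorAIEntry("LOS vendor", "document classification", "Anthropic",
                  critical=True, contingency_plan=False),
    VendorAIEntry("Core processor", "transaction anomaly detection", "in-house",
                  critical=True, contingency_plan=True),
    VendorAIEntry("Digital banking vendor", "member service chatbot", "OpenAI",
                  critical=False, contingency_plan=False),
]

# Flag the combination that hurts most: critical function, no documented fallback.
for entry in inventory:
    if entry.critical and not entry.contingency_plan:
        print(f"REVIEW: {entry.vendor} / {entry.function} depends on "
              f"{entry.upstream_provider} with no documented contingency plan")
```

Even an inventory this simple makes the vendor-within-a-vendor exposure visible: any critical function with no documented fallback is a conversation to have at your next vendor review.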
Third-Party AI Risk Assessment: A Practical Framework
If your institution does not have a formal AI vendor risk assessment process, build one now. The FHFA-Anthropic situation demonstrates that standard vendor due diligence is not sufficient for AI. You need to ask questions that go beyond uptime SLAs and SOC 2 reports. The broader pattern of automation risks across financial services is explored in our analysis of the hidden risks in financial services automation.
Key Questions for Every AI Vendor
- Where does the vendor's AI model come from? Does your vendor build its own models, license them from a foundation model provider (OpenAI, Anthropic, Google, Meta), or use open-source models? If they license from a third party, your vendor risk assessment must account for the upstream provider
- How is the model trained? What data was used to train the model? Does the model train on your data? How is model performance validated for your specific financial services use cases -- whether that is BSA/AML monitoring at a bank, member lending at a credit union, or document classification at a mortgage company?
- What data does the AI access? Does the model process customer PII, credit data, member information, or financial documents? Where is that data stored and processed? Does data leave your environment?
- What happens if the vendor loses its AI capabilities? If the vendor's upstream AI provider is disrupted (as happened with Anthropic), does your vendor have a contingency plan? Can they switch to an alternative model without disrupting your operations?
- What is the exit strategy? Can you move to a different vendor without losing data, re-training models, or rebuilding integrations? What is the realistic timeline and cost for a vendor transition?
These questions align with the Interagency Guidance on Third-Party Relationships (OCC Bulletin 2023-17, FDIC FIL-2023-29), which establishes principles for managing third-party vendor risk that apply equally to banks, credit unions operating under NCUA guidance, and non-depository mortgage lenders. The guidance applies to AI vendors as much as it applies to any other technology relationship.
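The same questions can be tracked as structured due-diligence items rather than free-form notes, so unanswered areas surface automatically at renewal time. A hedged sketch, with hypothetical field names and answers mirroring the five question areas above:

```python
# Hypothetical due-diligence record for one AI vendor. The keys mirror the five
# question areas above; None means "not yet answered by the vendor".
due_diligence = {
    "model_provenance": "Licenses a third-party foundation model",
    "training_practices": None,   # training data, and whether our data trains the model
    "data_access": "Processes customer PII in the vendor's cloud",
    "upstream_contingency": None, # fallback if the upstream provider is disrupted
    "exit_strategy": "12-month transition estimate on file",
}

open_items = [area for area, answer in due_diligence.items() if answer is None]
if open_items:
    print("Open due-diligence items before renewal:", ", ".join(open_items))
```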
"As organizations deepen partnerships with major cloud and AI providers, regulators and executives are increasingly focused on concentration risk, the concern that reliance on a relatively small number of technology providers might create critical business vulnerabilities."
Microsoft Industry Blog, February 2026
Contract Provisions Every Financial Institution Needs for AI Vendors
Your vendor contracts may need updating. Standard technology service agreements often do not address AI-specific risks. Based on regulatory guidance and the lessons of the FHFA-Anthropic termination, here are the provisions banks, credit unions, and mortgage companies should require in AI vendor contracts.
AI Model Transparency
- Require vendors to disclose which AI models their products use and identify any upstream model providers
- Require notification when the vendor changes the underlying AI model, training data, or model architecture
- Specify that the vendor must disclose any AI components added to existing products, not just purpose-built AI features
Audit and Testing Rights
- Retain the right to audit AI model performance, bias testing results, and validation documentation
- Require the vendor to provide model performance data relevant to your institution's specific use cases on a scheduled basis
- Include the right to conduct independent testing of AI outputs for fair lending, BSA/AML accuracy, and other compliance-sensitive functions
Data Handling Requirements
- Specify that customer and member data processed by vendor AI must not be used for model training without explicit consent
- Define data residency requirements for AI processing
- Require data portability provisions so your data can be extracted if the vendor relationship ends
Change Notification and Contingency
- Require advance notification of any material changes to AI functionality, model providers, or data handling practices
- Define what constitutes a material change that triggers notification
- Require the vendor to maintain a documented contingency plan if their upstream AI provider becomes unavailable
Regulatory Compliance Obligations
- Require the vendor to support your compliance with applicable regulatory requirements -- whether OCC guidance for banks, NCUA expectations for credit unions, or Freddie Mac Section 1302.8 for mortgage seller/servicers
- Require the vendor to cooperate with examiner requests related to AI use, regardless of your primary regulator
- Include termination provisions if the vendor cannot demonstrate compliance with applicable regulatory requirements
Building AI Vendor Resilience
Risk assessment and contract provisions are defensive measures. Building genuine resilience requires a broader strategy that accounts for the speed at which AI vendor relationships can change.
Avoid Single-Vendor AI Dependency
If your entire fraud detection pipeline or document processing workflow depends on one AI vendor, a disruption to that vendor disrupts your operations. Where practical, evaluate alternative vendors for critical AI functions. Even if you do not switch today, having evaluated the alternatives puts you in a stronger position if a change is forced.
The Financial Stability Board has flagged AI vendor concentration as a systemic risk for financial services. Black Kite's 2026 Third-Party Breach Report found an average of 5.28 downstream victims per third-party breach, the highest level recorded, indicating how vendor disruptions cascade through interconnected systems. For credit unions and community banks with limited vendor management staff, concentration risk is amplified -- you have fewer resources to manage a forced transition.
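When you ask a vendor whether they can switch foundation models without disrupting your operations, you are really asking whether their AI calls sit behind an abstraction layer with a tested fallback. The sketch below illustrates that design pattern in Python with hypothetical provider classes; it shows the shape of the question to ask, not any vendor's actual implementation.

```python
from typing import Protocol

class TextModel(Protocol):
    """Anything that can classify a document; the provider behind it is swappable."""
    def classify(self, document: str) -> str: ...

class PrimaryProviderModel:
    def classify(self, document: str) -> str:
        # In a real system this would call the primary foundation model's API.
        return "W-2"

class FallbackProviderModel:
    def classify(self, document: str) -> str:
        # A second provider, or an in-house model, kept ready as a contingency.
        return "W-2"

def classify_with_fallback(document: str, primary: TextModel, fallback: TextModel) -> str:
    """Route to the fallback when the primary provider is unavailable."""
    try:
        return primary.classify(document)
    except Exception:
        # Provider outage, contract termination, or a forced migration window.
        return fallback.classify(document)

print(classify_with_fallback("...document text...", PrimaryProviderModel(), FallbackProviderModel()))
```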
Understand Your Vendor's Vendor
The Anthropic episode illustrates that your vendor's AI provider can become your problem. Ask your core banking provider, LOS vendor, fraud detection platform, and digital banking provider whether they use Anthropic, OpenAI, Google, or other foundation models. Map those upstream dependencies so you understand your full exposure. A credit union's CUSO, a bank's core processor, and a mortgage company's LOS vendor may all share the same upstream AI dependency without any of their customers knowing.
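The mapping itself is mechanical once vendors answer the question. A small sketch, using hypothetical vendor-to-provider answers: invert the map to see which vendors share a single upstream provider, which is exactly the exposure the FHFA-Anthropic episode revealed.

```python
from collections import defaultdict

# Hypothetical answers from vendor inquiries: vendor -> upstream model provider.
vendor_upstream = {
    "Core processor": "in-house",
    "LOS vendor": "Anthropic",
    "Fraud detection platform": "Anthropic",
    "Digital banking vendor": "OpenAI",
}

# Invert the map: which of your vendors would one upstream disruption reach?
exposure = defaultdict(list)
for vendor, provider in vendor_upstream.items():
    exposure[provider].append(vendor)

for provider, vendors in exposure.items():
    if provider != "in-house" and len(vendors) > 1:
        print(f"Shared upstream dependency on {provider}: {', '.join(vendors)}")
```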
Build Internal AI Competency
You do not need to build your own AI models. But you do need staff who can evaluate AI vendor claims, test AI outputs, and make informed decisions about AI risk. Invest in AI literacy for your compliance, technology, and operations teams. For smaller institutions, this might mean designating one person as the AI vendor oversight lead rather than spreading the responsibility across a team that is already stretched thin.
Regular Vendor Reviews
Annual vendor reviews are not sufficient for AI vendors. AI technology changes faster than traditional software. Schedule quarterly reviews for vendors whose AI touches lending decisions, transaction monitoring, member data, or compliance-sensitive functions. Between reviews, require vendors to notify you of material changes to their AI components, including upstream provider changes.
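Cadence is easier to enforce from a dated register than from calendar reminders. A minimal sketch, assuming a quarterly cycle for vendors whose AI touches compliance-sensitive functions and an annual cycle for the rest; the vendors and dates are illustrative.

```python
from datetime import date, timedelta

# Hypothetical register: (vendor, touches compliance-sensitive AI?, last review date).
register = [
    ("LOS vendor", True, date(2025, 11, 15)),
    ("Email security platform", False, date(2025, 6, 1)),
]

today = date(2026, 3, 9)  # illustrative "today"
for vendor, sensitive, last_review in register:
    interval = timedelta(days=90) if sensitive else timedelta(days=365)
    if today - last_review > interval:
        cycle = "quarterly" if sensitive else "annual"
        print(f"OVERDUE: {vendor} (last reviewed {last_review}, {cycle} cycle)")
```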
ABT works with 750+ financial institutions to manage their technology vendor ecosystem, including the due diligence and ongoing monitoring that AI vendor relationships now require. As the largest Tier-1 Microsoft Cloud Solution Provider dedicated to financial services, ABT helps banks, credit unions, and mortgage companies build vendor risk management strategies that protect operations and maintain regulatory compliance across OCC, FDIC, NCUA, and GSE requirements.
For more on protecting your technology environment, see our analysis of OWASP Top 10 for Agentic AI in Financial Institutions and our guide to the Treasury AI Risk Framework for Financial Institutions.
Frequently Asked Questions
Why did FHFA terminate its contract with Anthropic?
FHFA terminated its Anthropic contract in March 2026 following President Trump's directive ordering federal agencies to stop using Anthropic technology. The directive came after Anthropic refused to remove AI safety restrictions for Pentagon use, and Defense Secretary Hegseth designated Anthropic as a supply-chain risk to national security. FHFA extended the termination to include Fannie Mae and Freddie Mac, sending a signal to the entire financial services industry about AI vendor risk.
Does the federal ban on Anthropic apply to private financial institutions?
The federal ban applies to government agencies and contractors, not directly to private financial institutions. However, FHFA regulates Fannie Mae and Freddie Mac, which set requirements for mortgage seller/servicers. Banks under OCC and FDIC oversight, credit unions under NCUA, and mortgage companies with GSE relationships should all monitor whether regulatory guidance extends vendor restrictions. Regardless of direct applicability, every financial institution should use this as a catalyst to assess its own AI vendor dependencies.
Which AI vendor risks should financial institutions prioritize?
Financial institutions should prioritize upstream model provider risk (vendor-within-a-vendor dependencies), vendor concentration across critical functions, data handling and privacy practices for customer and member information, model transparency and auditability, business continuity planning if the vendor loses AI capabilities, and alignment with their primary regulator's expectations -- whether OCC, FDIC, NCUA, state regulators, or GSE requirements under Freddie Mac Bulletin 2025-16.
How do AI vendor oversight requirements differ for banks, credit unions, and mortgage companies?
Banks follow OCC Bulletin 2023-17 and FDIC FIL-2023-29 for third-party risk management, which now encompass AI vendors. Credit unions operate under NCUA guidance, but GAO has identified a gap in NCUA's authority to directly examine credit union technology service providers. Mortgage companies face explicit AI governance requirements under Freddie Mac Bulletin 2025-16. All three institution types should apply the same foundational AI vendor assessment framework, but tailor contract provisions and reporting to their specific regulatory requirements.
What contract provisions should financial institutions require from AI vendors?
Key contract provisions include AI model transparency and upstream provider disclosure, audit rights for model performance and bias testing, advance notification of model changes, data portability and exit provisions, prohibition on using customer or member data for model training without consent, and requirements for regulatory compliance support. Institutions should also require vendors to maintain documented contingency plans for upstream AI provider disruption.
How should smaller institutions with limited staff manage AI vendor risk?
With 73% of financial institutions having two or fewer staff dedicated to vendor risk management, smaller institutions should focus on three priorities: first, inventory which vendors use AI and identify upstream model providers; second, update contracts for critical AI vendors to include model transparency, change notification, and exit provisions; third, work with a managed technology partner who can monitor AI vendor dependencies and provide the specialized assessment capability that smaller teams cannot maintain in-house.