ABT Blog

Colorado AI Act Countdown: 118 Days to Comply

Written by Justin Kirsch | Wed, Mar 04, 2026

Colorado SB 24-205 takes effect June 30, 2026. It is the first comprehensive state AI law in the United States, and it applies to any organization that uses AI to make consequential decisions about consumers in financial services, lending, insurance, employment, or housing. If your financial institution operates in Colorado or serves Colorado consumers, you have 118 days from publication of this article to comply.

This is not theoretical regulation. The Colorado Attorney General has exclusive enforcement authority, violations carry civil penalties of up to $20,000 per consumer per transaction, and with no private right of action, there are no private lawsuits to settle quietly: enforcement comes directly from the AG. This article breaks down what the law requires, who it applies to, and how to build a 90-day compliance roadmap that gets your institution ready before the deadline.
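Because penalties are counted separately per consumer per transaction, exposure scales multiplicatively. A back-of-the-envelope sketch (the $20,000 cap comes from the statute; the consumer and transaction counts below are invented for illustration):

```python
# Illustration of how SB 24-205 penalties compound.
# $20,000 per violation is the statutory maximum; the scenario is hypothetical.
MAX_PENALTY_PER_VIOLATION = 20_000  # USD, per consumer per transaction

def max_exposure(consumers: int, transactions_per_consumer: int) -> int:
    """Worst-case civil penalty if every transaction were found a violation."""
    return consumers * transactions_per_consumer * MAX_PENALTY_PER_VIOLATION

# A single mispriced loan product touching 500 Colorado consumers once each:
print(max_exposure(consumers=500, transactions_per_consumer=1))  # → 10000000
```

Five hundred consumers and one transaction apiece already yields a theoretical $10 million ceiling, which is why the per-consumer, per-transaction counting rule matters more than the headline $20,000 figure.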

What the Colorado AI Act Requires

Colorado SB 24-205, titled the Consumer Protections for Artificial Intelligence Act, was signed into law on May 17, 2024. Originally set for February 1, 2026, the effective date was pushed to June 30, 2026, after Governor Polis signed SB 25B-004 during a special legislative session in August 2025. The delay gives organizations five additional months, but the compliance requirements remain unchanged.

The law creates two categories of regulated entities: developers (organizations that build or substantially modify AI systems) and deployers (organizations that use AI systems to make decisions about consumers). Most financial institutions are deployers. Some that build custom AI models are both.

The core requirement is straightforward: deployers and developers must use "reasonable care" to protect consumers from algorithmic discrimination when using high-risk AI systems. What makes it complex is how the law defines "high-risk," what "reasonable care" means in practice, and the documentation burden that comes with compliance.

Why This Matters Right Now

The Colorado Attorney General is currently in the pre-rulemaking phase for implementing SB 24-205. Meanwhile, 1,208 AI-related bills were introduced across all 50 states in 2025, with 145 enacted into law. Colorado is the template. Texas's Responsible AI Governance Act (TRAIGA) took effect January 1, 2026. Illinois amended its Human Rights Act to cover AI discrimination effective January 1, 2026. The compliance framework you build for Colorado will become the foundation for every state AI law that follows.

Does This Law Apply to Your Financial Institution?

The coverage test is simpler than most financial regulations. Answer these three questions:

Question 1: Do you operate in Colorado or serve Colorado consumers?

If your institution has branches, offices, or employees in Colorado, the answer is yes. If your institution offers products or services to Colorado residents through online channels, the answer is almost certainly yes. Multi-state banks, credit unions with Colorado members, and mortgage companies originating Colorado loans are covered regardless of where they are headquartered.

Question 2: Do you use AI in any "consequential decision"?

The law defines a consequential decision as one with a material legal or similarly significant effect on a consumer's access to financial or lending services, employment, insurance, housing, education, healthcare, or government services. For financial institutions, this covers:

  • Credit scoring and underwriting decisions
  • Loan approval or denial
  • Pricing, interest rate, or fee determinations
  • Fraud detection that affects account access
  • Insurance pricing and claims decisions
  • Hiring and HR screening

If AI is "a substantial factor" in any of these decisions, the system qualifies as high-risk.

Question 3: Are you a developer, a deployer, or both?

If you build or substantially modify AI models, you are a developer. If you use AI systems (whether built in-house, purchased from a vendor, or accessed via API) to make consequential decisions, you are a deployer. Most banks and credit unions are deployers. Institutions that train custom machine learning models for credit risk or fraud detection may also qualify as developers, triggering additional obligations.

$20,000
maximum civil penalty per violation under the Colorado AI Act, counted separately per consumer per transaction
Source: Colorado SB 24-205, Section 6-1-1705

Colorado AI Act coverage decision tree: three questions determine whether your financial institution is subject to SB 24-205 deployer obligations.

High-Risk AI in Banking: What Qualifies

The Colorado AI Act does not ban AI in financial services. It regulates "high-risk" AI systems, defined as those that make or substantially contribute to consequential decisions about consumers. Here is how common financial institution AI use cases map to the law's definitions.

Credit Scoring and Underwriting (HIGH-RISK): Any AI model that factors into loan approval, credit tier assignment, pricing, or deposit requirements qualifies. This includes traditional statistical models, machine learning underwriting, and AI-enhanced credit decisioning platforms. If the model's output materially affects whether a consumer gets a loan, what rate they receive, or what terms they are offered, it is high-risk.

Fraud Detection and BSA/AML (LIKELY HIGH-RISK): If AI-driven fraud detection results in frozen accounts, declined transactions, or flagged activity that triggers SAR filings affecting the consumer's account access, it likely qualifies. The key question is whether the AI's output has a "material legal or similarly significant effect" on the consumer's access to financial services.

Hiring and HR Screening (HIGH-RISK): AI systems used in resume screening, candidate scoring, interview assessment, or performance evaluation are explicitly covered by the law. This applies to HR tools from vendors like HireVue, Eightfold, or any ATS with AI-powered scoring.

Insurance Pricing (HIGH-RISK): AI models that affect insurance pricing, coverage decisions, or claims resolution are covered. This is relevant for institutions with insurance subsidiaries or affiliates.

Customer Service Chatbots (POTENTIALLY HIGH-RISK): A chatbot that answers general questions is likely not high-risk. A chatbot that can initiate disputes, modify account settings, or influence access to financial products may cross the threshold. The determining factor is whether the chatbot's actions have a material effect on the consumer's access to services.

Marketing and Targeting (LOWER RISK): AI used for marketing personalization, campaign targeting, and content recommendation is generally not covered unless it substantially affects a consumer's access to financial products (e.g., targeted credit offers with different terms based on AI profiling).

"A bank, credit union, or affiliate thereof is in full compliance with the act if the entity is subject to examination by a state or federal prudential regulator under published guidance that applies to high-risk systems and meets criteria specified in the act."

Colorado SB 24-205, Prudential Regulator Safe Harbor Provision

The Six Compliance Requirements for Deployers

Financial institutions that qualify as deployers of high-risk AI systems must meet six specific obligations under the Colorado AI Act. Here is what each requires and how it maps to your existing compliance infrastructure.

1. Risk Management Policy and Program

Deployers must implement a documented AI risk management program that fits the size and complexity of the business. The law explicitly cites the NIST AI Risk Management Framework as a benchmark, but also accepts ISO/IEC 42001 or any framework designated by the Colorado Attorney General. For financial institutions already using NIST CSF or FFIEC frameworks, extending these to cover AI risk is the most efficient path. This maps to existing model risk management (Federal Reserve SR 11-7, adopted by the OCC as Bulletin 2011-12) and third-party risk management programs.

2. Impact Assessment for Each High-Risk System

Deployers must perform an initial impact assessment for each high-risk AI system and re-evaluate at least annually, or within 90 days after any substantial modification. The assessment must document: the purpose and intended use, how the system was evaluated for algorithmic discrimination, the data used as inputs, the outputs generated, any safeguards implemented, and the metrics used to evaluate performance. This is more granular than most existing model validation processes and requires specific attention to discrimination testing.

3. Algorithmic Discrimination Testing

The core obligation: prevent algorithmic discrimination. The law defines this as AI-driven differential treatment based on protected characteristics including race, color, religion, sex, sexual orientation, national origin, disability, or age. For financial institutions, this overlaps significantly with ECOA (Equal Credit Opportunity Act) fair lending requirements. Institutions already running disparate impact analysis for lending models have a foundation, but the Colorado AI Act extends this requirement to all high-risk AI systems, not just lending models.
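The statute does not prescribe a test methodology. A common starting point, borrowed from employment law, is the EEOC "four-fifths rule": compare each protected group's approval rate to the most-favored group's. A sketch with invented numbers; a ratio below 0.8 is a screening flag that warrants deeper statistical analysis, not a legal conclusion:

```python
def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants receiving the favorable outcome."""
    return approved / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's approval rate to the most-favored
    group's. Under the four-fifths rule, < 0.8 flags potential disparate
    impact for further review."""
    return protected_rate / reference_rate

# Illustrative numbers: 620 of 1,000 reference-group applicants approved
# vs. 430 of 1,000 protected-group applicants.
ref = selection_rate(620, 1000)    # 0.62
prot = selection_rate(430, 1000)   # 0.43
ratio = adverse_impact_ratio(prot, ref)
print(f"{ratio:.2f}", "flag" if ratio < 0.8 else "ok")  # → 0.69 flag
```

Institutions already running ECOA fair lending analysis will recognize this screen; the Colorado AI Act's contribution is extending the same discipline to fraud, HR, and other non-lending high-risk systems.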

4. Consumer Notification

Before using AI to make a consequential decision, deployers must notify the consumer that AI is being used. After an adverse decision, deployers must explain the role AI played and provide information about how the consumer can challenge the decision. For financial institutions, this parallels ECOA adverse action notices but adds an explicit AI disclosure requirement. The notification must include a description of the AI system and information about the consumer's right to correct personal data used by the system.

5. Record-Keeping and Documentation

Deployers must maintain records of their risk management program, impact assessments, and consumer notifications. The law does not specify a retention period, but the prudent approach is to align with existing regulatory retention requirements (typically 3-7 years for financial institutions). Records must be sufficient for the Attorney General to evaluate compliance.

6. Incident Reporting

If a deployer discovers that a high-risk AI system has caused algorithmic discrimination, the deployer must report the incident to the Colorado Attorney General within 90 days. This is a new reporting obligation that does not exist in most financial regulatory frameworks. Institutions need a process for detecting, evaluating, and reporting algorithmic discrimination events specifically to the Colorado AG, separate from existing SAR, CFPB, and regulatory reporting channels.
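A small sketch of the reporting clock, assuming the 90 days run as calendar days from the discovery date (the statute's exact counting convention should be confirmed with counsel):

```python
from datetime import date, timedelta

def ag_report_deadline(discovered_on: date) -> date:
    """Deadline to report a discovered algorithmic-discrimination incident
    to the Colorado AG; calendar-day counting is our assumption."""
    return discovered_on + timedelta(days=90)

# Discovery two weeks after the law's effective date:
print(ag_report_deadline(date(2026, 7, 15)))  # → 2026-10-13
```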

1,208
AI-related bills introduced across all 50 states in 2025 alone, with 145 enacted into law
Source: National Conference of State Legislatures (NCSL), 2025

What Other States Are Doing: The Patchwork Problem

Colorado is not alone. Financial institutions operating across multiple states face a rapidly expanding patchwork of AI regulations with different definitions, requirements, and enforcement mechanisms.

Texas (TRAIGA, effective January 1, 2026): Prohibits AI systems from unlawfully discriminating against protected classes. Broader in scope than Colorado, covering all AI discrimination rather than limiting to "high-risk" systems, but with less specific compliance requirements.

Illinois (Human Rights Act amendments, effective January 1, 2026): Prohibits employers from using AI that discriminates on the basis of protected class. Focused on employment decisions, but financial institutions' HR AI tools are covered.

California (SB 53, signed September 2025): The Transparency in Frontier AI Act requires developers of large AI models to publish risk frameworks, report safety incidents, and implement whistleblower protections. Also, California's FEHA AI regulations (effective October 2025) cover automated decision systems in employment.

Utah: Requires disclosure when consumers interact with generative AI. Established an AI learning laboratory for regulatory experimentation.

Federal preemption uncertainty: In December 2025, an executive order proposed establishing a uniform federal AI policy framework that could preempt state laws deemed inconsistent. The scope and enforceability of this preemption remains unclear, and financial institutions cannot rely on it to eliminate state compliance obligations.

The strategic argument is clear: build a comprehensive AI governance framework now rather than a series of state-by-state compliance programs later. Colorado's requirements are comprehensive enough that compliance with Colorado positions your institution well for Texas, Illinois, California, and whatever comes next.

The Shadow AI Problem

The shadow AI problem makes Colorado AI Act compliance harder than it looks. If employees at your institution are using AI tools that your compliance team does not know about, those systems create unmanaged regulatory exposure under every state AI law. An AI inventory is the first step in any compliance program, and it needs to capture both sanctioned and unsanctioned AI use.

A 90-Day Compliance Roadmap

The deadline is fixed. Here is an aggressive but realistic 90-day plan for financial institutions that have not yet started Colorado AI Act compliance preparation.

Days 1-30: Discovery and Assessment

  • AI inventory: Catalog every AI system in your environment. Include vendor-provided AI (credit scoring, fraud detection, chatbots), internally developed models, and employee-initiated AI tools (shadow AI). For each system, document: purpose, data inputs, outputs, autonomy level, and which decisions it influences.
  • Legal coverage analysis: Work with counsel to determine which systems qualify as "high-risk" under the Colorado definition. Map each system against the "consequential decision" criteria.
  • Governance committee formation: Establish an AI governance committee (or expand an existing model risk committee) with representation from compliance, legal, IT, business lines, and risk management.
  • Vendor assessment: Contact every AI vendor and request their SB 24-205 compliance documentation. Developers have obligations to provide technical documentation to deployers. If your vendor cannot provide it, flag that as a risk.

Days 31-60: Implementation

  • Impact assessments: Complete an impact assessment for each high-risk AI system following the format required by the law. Use NIST AI RMF or ISO 42001 as your framework, since the law accepts both.
  • Algorithmic discrimination testing: For each high-risk system, perform or commission bias testing across all protected characteristics. For lending models, extend your existing ECOA fair lending analysis. For non-lending systems, establish new testing protocols.
  • Consumer notification procedures: Draft pre-decision and adverse-decision consumer notification templates. Work with counsel to ensure they meet both Colorado AI Act and existing regulatory requirements (ECOA adverse action, TILA disclosures).
  • Risk management program documentation: Document your AI risk management program in a format that satisfies the law's requirements. Map your program controls to NIST AI RMF or ISO 42001.

Days 61-90: Verification and Launch

  • Documentation completion: Finalize impact assessments, risk management program documentation, and consumer notification templates. Verify completeness against the law's specific requirements.
  • Staff training: Train compliance officers, business line managers, and customer-facing staff on new AI disclosure requirements. Staff need to understand when and how to disclose AI involvement in decisions.
  • Monitoring implementation: Deploy monitoring processes to detect algorithmic discrimination, track AI system performance, and identify unauthorized AI use. Establish the 90-day incident reporting process to the Colorado AG.
  • Legal review: Final legal review of all compliance documentation, consumer notifications, and incident reporting procedures. Confirm alignment with both Colorado AI Act and existing regulatory frameworks like the Treasury AI Risk Framework.
  • Prudential regulator safe harbor evaluation: Assess whether your institution qualifies for the safe harbor provision for entities subject to examination by state or federal prudential regulators. This could significantly simplify your compliance burden.

A 90-day compliance roadmap for financial institutions preparing for the Colorado AI Act June 30, 2026 deadline.

How ABT Helps Financial Institutions Prepare

ABT has guided financial institutions through every regulatory compliance deadline since GLBA in 1999. As the largest Tier-1 Microsoft Cloud Solution Provider primarily dedicated to financial services, ABT serves 750+ financial institutions that face the same Colorado AI Act compliance challenge.

The compliance roadmap starts with knowing what AI exists in your environment. The AI Readiness Scan evaluates your Microsoft 365 tenant for AI deployment readiness, identifies governance gaps, and maps your current AI tools and configurations against the controls that regulators expect. For institutions preparing for Colorado AI Act compliance, the scan provides the AI inventory and governance assessment that forms the foundation of your risk management program.

The Guardian operating model provides the governed technology environment that AI compliance requires: identity governance through Entra ID, data protection through Purview DLP, conditional access policies that control how AI tools interact with sensitive data, and continuous monitoring that detects configuration drift. These controls do not just satisfy the Colorado AI Act. They position your institution for the OWASP agentic AI security framework, the Treasury AI Risk Framework, and whatever state or federal AI regulation comes next. Meeting these obligations requires a fundamental shift in approach — read more about the shift from annual audits to living compliance.

118 Days Until the Colorado AI Act Deadline

The clock is running. ABT's AI Readiness Scan identifies your AI governance gaps, maps your current AI deployment, and provides the foundation for your compliance program. Start with an inventory of what AI exists in your environment.

Start Your AI Readiness Scan

Frequently Asked Questions

What is the Colorado AI Act?

The Colorado AI Act (SB 24-205) is the first comprehensive state AI law in the United States, signed May 17, 2024, and effective June 30, 2026. It requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. The law covers AI used in consequential decisions about financial services, employment, insurance, housing, and healthcare.

When does the Colorado AI Act take effect?

The Colorado AI Act takes effect June 30, 2026. The original effective date was February 1, 2026, but Governor Polis signed SB 25B-004 in August 2025 delaying implementation by five months. The Attorney General has pre-enforcement rulemaking authority and is currently developing implementing regulations. A 60-day cure period applies before enforcement actions.

Does the Colorado AI Act apply to banks and credit unions?

Yes. Banks and credit unions that use AI in consequential decisions about consumers in financial or lending services are covered. However, the law includes a safe harbor: institutions subject to examination by state or federal prudential regulators under published AI guidance may satisfy compliance through existing regulatory frameworks. Multi-state institutions serving Colorado consumers are covered regardless of headquarter location.

What counts as a high-risk AI system?

A high-risk AI system is one that makes, or is a substantial factor in making, a consequential decision with material legal or significant effect on a consumer. For financial institutions, this includes credit scoring, underwriting, loan approval, fraud detection affecting account access, insurance pricing, and hiring tools. The key trigger is whether the AI output materially affects the consumer's access to or terms of financial services.

What are the penalties for violating the Colorado AI Act?

Violations are classified as deceptive trade practices under the Colorado Consumer Protection Act. The maximum civil penalty is $20,000 per violation, counted separately per consumer per transaction. The Colorado Attorney General has exclusive enforcement authority with no private right of action. Organizations receive a 60-day notice and cure period before formal enforcement, but penalties can accumulate rapidly across multiple consumers.

Does the law apply to financial institutions headquartered outside Colorado?

If your institution serves Colorado consumers, likely yes. The law applies to persons "doing business in Colorado" as deployers or developers of high-risk AI systems. Financial institutions offering products to Colorado residents through online channels, multi-state lending, or digital banking services are covered regardless of where they are headquartered. The reach is similar to other state consumer protection laws.

Are other states passing similar AI laws?

Yes. Texas's Responsible AI Governance Act and Illinois's Human Rights Act AI amendments both took effect January 1, 2026. California signed the Transparency in Frontier AI Act in September 2025. In 2025, 1,208 AI-related bills were introduced across all 50 states, with 145 enacted. A December 2025 federal executive order proposed preempting some state AI laws, but its enforceability remains uncertain.

Justin Kirsch

CEO, Access Business Technologies

Justin Kirsch has guided financial institutions through every major regulatory compliance deadline since GLBA in 1999. As CEO of Access Business Technologies, he helps banks, credit unions, and mortgage companies prepare for emerging AI regulation, including the Colorado AI Act, the first comprehensive state AI law with real enforcement teeth.