
Colorado AI Act (SB24-205): Complete Employer Compliance Guide

Colorado's AI Act creates nation-leading requirements for high-risk AI in employment. Impact assessments, consumer notifications, opt-outs, and risk management explained with implementation guidance.

Devyn Bartell
Founder & CEO, EmployArmor
Published February 23, 2026

Colorado Senate Bill 24-205, enacted in May 2024 and effective February 1, 2026, establishes one of the most comprehensive AI regulatory frameworks in the United States. The law takes a risk-based approach, imposing strict requirements on "high-risk artificial intelligence systems" that make or substantially assist consequential decisions—including all employment and hiring decisions.

For employers using AI in hiring, promotion, termination, or compensation decisions affecting Colorado residents, SB24-205 creates mandatory impact assessments, consumer notification requirements, opt-out rights, appeal processes, and substantial penalties for non-compliance enforced by Colorado Attorney General Phil Weiser.

Key Dates & Enforcement

  • Enacted: May 17, 2024
  • Effective Date: February 1, 2026
  • Rulemaking Period: Ongoing through January 2026
  • Enforced By: Colorado Attorney General Phil Weiser
  • Penalties: Up to $20,000 per violation
  • Cure Period: 60 days after notice of violation (before penalties imposed)

Understanding "High-Risk" AI Systems for Employment

SB24-205 defines a high-risk AI system as any artificial intelligence system that, when deployed, makes or is a substantial factor in making a consequential decision. Employment decisions are explicitly enumerated as consequential decisions under the law.

Employment Decisions Covered

Under Colorado law, the following employment actions are consequential decisions subject to high-risk AI regulations:

  • Hiring and recruitment: Resume screening, candidate ranking, video interview analysis, skills assessments that influence hiring decisions
  • Promotions and advancement: Performance evaluation systems, succession planning algorithms, internal mobility recommendations
  • Demotions and terminations: Performance scoring systems that lead to termination, automated flagging for corrective action
  • Compensation decisions: Algorithmic salary determination, bonus calculations, pay equity analysis that influences compensation
  • Benefits allocation: AI systems determining benefit eligibility or allocation
  • Performance monitoring: Productivity tracking systems that substantially influence management decisions

Common Misconception

"Our AI only makes recommendations; humans make the final decisions, so we're not covered." Wrong. If AI is a "substantial factor" in the decision, even when a human has final authority, the law applies. If your recruiter relies on AI to narrow 500 applicants to 20, that AI is a substantial factor and is covered.

Core Compliance Requirements

1. Impact Assessments (§ 6-1-1306)

Before deploying a high-risk AI system for employment decisions, deployers (employers) must complete a comprehensive impact assessment. This is not optional—it's a legal prerequisite to deployment.

Required Elements of an Impact Assessment:

  • System identification: Name, version, vendor, developer, and purpose of the AI system
  • Intended use and benefits: What employment decisions the AI supports and expected benefits (efficiency, consistency, reduced bias)
  • Data inventory: Categories of data collected and processed, data sources, data retention periods, and data protection measures
  • Algorithmic discrimination analysis: Known or reasonably foreseeable risks that the AI may result in algorithmic discrimination based on protected characteristics (race, sex, age, disability, etc.)
  • Mitigation measures: Specific steps taken to mitigate identified risks, including technical safeguards, human oversight, and bias testing
  • Transparency mechanisms: How consumers will be informed about AI use and how they can exercise their rights
  • Post-deployment monitoring: Procedures for ongoing monitoring of AI performance and discrimination metrics
  • Consumer feedback: How consumer feedback and complaints will be received and addressed
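One way to keep these assessments consistent across AI systems is a structured record with a completeness check before deployment. A minimal sketch in Python; the class and field names are ours for illustration and do not come from the statute:

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the required elements listed above."""
    system_name: str
    vendor: str
    version: str
    purpose: str
    data_categories: list        # categories of data collected and processed
    retention_period: str
    discrimination_risks: list   # known or reasonably foreseeable risks
    mitigation_measures: list
    transparency_mechanisms: str
    monitoring_procedures: str
    feedback_process: str
    last_reviewed: str           # ISO date; refresh at least annually

def missing_elements(assessment: ImpactAssessment) -> list:
    """Return the names of any required elements left empty."""
    return [f.name for f in fields(assessment)
            if not getattr(assessment, f.name)]
```

A deployment gate could then refuse to activate a system while `missing_elements` returns anything, making the "prerequisite to deployment" rule mechanical rather than a matter of memory.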

Update Requirements:

  • Impact assessments must be reviewed and updated at least annually
  • Updates required whenever there are material changes to the AI system (model updates, new data sources, changes to decision criteria)
  • Updates required if post-deployment monitoring reveals previously unknown risks

Documentation and Submission:

  • Impact assessments must be documented in writing and retained for at least 3 years
  • Deployers must make impact assessments available to the Attorney General upon request
  • The Attorney General may request assessments as part of investigations or proactive oversight

2. Consumer Notifications (§ 6-1-1307)

Before a high-risk AI system is used to make a consequential decision about a Colorado consumer (including job applicants), the deployer must provide clear and conspicuous notice.

Required Notice Elements:

  • AI use disclosure: Clear statement that an automated decision system or AI is being used
  • Purpose statement: Explanation of the purpose of the AI system and what it evaluates
  • Decision type: Description of the type of consequential decision being made (hiring, promotion, etc.)
  • Contact information: How consumers can contact the deployer with questions or concerns about AI use
  • Rights information: Notice of consumer rights including appeal, data correction, and (where applicable) opt-out rights

Notice Timing and Method:

  • Notice must be provided before the AI system is used to make or substantially contribute to a decision
  • For hiring, best practice is to include notice in job postings or at application initiation
  • Notice must be "clear and conspicuous"—not buried in lengthy privacy policies or terms of service
  • Notice should be provided in plain language accessible to the average consumer
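The before-use timing rule can be enforced mechanically inside an applicant-tracking workflow rather than left to manual checklists. A minimal sketch, with a function name of our own invention:

```python
from datetime import datetime

def may_run_ai_evaluation(notice_sent_at, evaluation_at):
    """Gate AI scoring on the § 6-1-1307 timing rule: block the
    evaluation unless a notice was actually sent, and sent strictly
    before the AI system runs."""
    return notice_sent_at is not None and notice_sent_at < evaluation_at
```

For example, a notice delivered at application initiation on February 10 permits an AI screen on February 12, while a missing or late notice blocks it; wiring this check into the pipeline turns a legal requirement into a failing test.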

3. Statement of Adverse Decision (§ 6-1-1308)

If a high-risk AI system contributes to an adverse consequential decision (rejection of job application, denial of promotion, termination), the deployer must provide the affected consumer with a statement containing:

  • AI involvement disclosure: Notice that a high-risk AI system was used to make or substantially contribute to the decision
  • Principal reasons: The principal factors, inputs, or reasons that led to the adverse decision
  • Appeal process: Information about how to appeal the decision and request human review
  • Data correction: How the consumer can correct any inaccurate personal data that was used
  • Contact information: Who to contact with questions or to exercise rights

Important: The statement must be provided in a timely manner after the decision is made, similar to adverse action notice requirements under the Fair Credit Reporting Act.

4. Risk Management Policy and Procedures (§ 6-1-1306)

Deployers must implement and maintain a risk management policy and program governing their use of high-risk AI systems. The policy must:

  • Identify and mitigate risks: Establish procedures for identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination
  • Human oversight: Ensure that all consequential decisions involve meaningful human review and that humans have authority to override AI recommendations
  • Consumer rights procedures: Implement processes for consumers to appeal decisions, correct data, and opt out where applicable
  • Documentation: Maintain records of AI system use, decisions made, and mitigation measures
  • Training: Train employees who interact with high-risk AI systems on compliance obligations and bias awareness
  • Monitoring: Conduct ongoing monitoring of AI system performance, including testing for disparate impact and discriminatory outcomes
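The monitoring obligation above is often operationalized with the EEOC's "four-fifths" rule of thumb: a group whose selection rate falls below 80% of the highest group's rate warrants closer statistical review. The statute does not mandate this particular metric; this is a common-practice sketch:

```python
def adverse_impact_ratios(outcomes):
    """outcomes maps group -> (selected, total). Returns each group's
    selection rate divided by the highest group's rate. Values below
    0.80 (the EEOC four-fifths rule of thumb) flag potential disparate
    impact for deeper review."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def flagged_groups(outcomes, threshold=0.8):
    """Groups whose impact ratio falls below the threshold."""
    return [g for g, ratio in adverse_impact_ratios(outcomes).items()
            if ratio < threshold]
```

For instance, if group A is selected 40 times out of 100 applicants and group B 24 times out of 100, group B's impact ratio is 0.6 and it would be flagged for follow-up analysis.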

Consumer Rights Under SB24-205

Right to Notice

Consumers have the right to be informed before AI is used in consequential decisions affecting them. This right is absolute—there are no exceptions for trade secrets or proprietary systems.

Right to Explanation

After an adverse decision, consumers have the right to receive a meaningful explanation of the principal reasons for the decision. The explanation must be substantive—not just "AI was used" but what factors led to the outcome.

Right to Appeal and Human Review

Consumers can request human review of AI-influenced adverse decisions. The human reviewer must:

  • Have authority to reverse the AI's recommendation
  • Consider information beyond what the AI analyzed
  • Apply human judgment and discretion
  • Not be bound by the AI's output

Right to Correct Data

If inaccurate data contributed to an adverse decision, consumers have the right to correct that data and request reconsideration of the decision based on accurate information.

Right to Opt-Out (Limited Circumstances)

In certain contexts, consumers may opt out of profiling or automated decision-making. In employment, opt-out rights are narrower: an employer may decline an opt-out request where automated processing is necessary to complete the hiring process, but human oversight of the decision must still be provided.

Enforcement and Penalties

Enforcement Authority

The Colorado Attorney General has exclusive authority to enforce SB24-205. Current AG Phil Weiser has signaled that AI regulation is a priority for his office, establishing a dedicated Technology and Data Privacy Unit.

Penalty Structure

  • Civil penalties: Up to $20,000 per violation
  • Ongoing violations: Each day of continued non-compliance may constitute a separate violation
  • Injunctive relief: AG can seek court orders requiring deployers to cease using non-compliant AI
  • Corrective actions: Courts may order specific compliance measures, retraining of AI systems, or disclosure of compliance documentation

Affirmative Defense (§ 6-1-1310)

SB24-205 provides a limited affirmative defense if:

  1. The deployer discovers and cures a violation within 60 days of the AG's notice
  2. The violation was not intentional or reckless
  3. The deployer provides documentation demonstrating cure

Limitation: This defense is available only for the first violation and cannot be used for willful or bad faith violations.

No Private Right of Action

Unlike Illinois' BIPA (biometric privacy law), SB24-205 does not create a private right of action—individuals cannot sue directly for violations. Only the Attorney General can bring enforcement actions.

However: Individuals can still pursue discrimination claims under Colorado's Anti-Discrimination Act (CADA) or federal civil rights laws. SB24-205 violations may serve as evidence in those cases.

Comparison to Other State AI Laws

| Feature | Colorado | California | Illinois | NYC |
| --- | --- | --- | --- | --- |
| Effective Date | Feb 1, 2026 | Jan 1, 2026 | Jan 1, 2026 | Jul 5, 2023 |
| Impact Assessment | ✓ Required | ✓ Risk assessment | ✓ Recommended | ✓ Bias audit |
| Pre-Use Notice | ✓ Required | ✓ Required | ✓ Required | ✓ Required (10 days) |
| Adverse Decision Notice | ✓ Required | | | Limited |
| Appeal Rights | ✓ Human review | ✓ Opt-out | Limited | ✓ Alternative process |
| Penalties | $20,000/violation | $2,500-$7,500 | AG enforcement | $500-$1,500/day |
| Private Right of Action | No | Limited (breach) | No | No |

Colorado's approach combines California's risk assessment framework with NYC's emphasis on transparency and consumer rights. It's more prescriptive than California but less technically specific than NYC's bias audit requirements.

Implementation Roadmap

Phase 1: Assessment (Now – December 2025)

  • ☐ Inventory all AI tools used in employment decisions
  • ☐ Identify which qualify as "high-risk" under Colorado definition
  • ☐ Determine which positions/candidates are affected (Colorado residents)
  • ☐ Review vendor contracts for compliance support obligations
  • ☐ Establish internal compliance team and assign responsibilities

Phase 2: Impact Assessments (December 2025 – January 2026)

  • ☐ Complete impact assessment for each high-risk AI system
  • ☐ Conduct or obtain bias testing results
  • ☐ Document risk mitigation measures
  • ☐ Obtain vendor cooperation and documentation
  • ☐ Legal review of impact assessments

Phase 3: Policy Development (January 2026)

  • ☐ Draft risk management policy
  • ☐ Create consumer notification templates
  • ☐ Create adverse decision statement templates
  • ☐ Establish appeal and human review procedures
  • ☐ Develop data correction processes
  • ☐ Create training materials for HR staff

Phase 4: Implementation (January 2026)

  • ☐ Train HR staff and hiring managers
  • ☐ Integrate notifications into hiring workflows
  • ☐ Configure systems to track AI use and consumer rights requests
  • ☐ Test all processes end-to-end
  • ☐ Establish monitoring and documentation procedures

Phase 5: Ongoing Compliance (Post-February 2026)

  • ☐ Annual impact assessment reviews
  • ☐ Quarterly monitoring of AI outcomes for bias
  • ☐ Monthly review of consumer rights requests
  • ☐ Ongoing training refreshers
  • ☐ Monitor regulatory guidance from Attorney General

Sample Impact Assessment Template

Impact Assessment for [AI System Name]

1. System Identification

  • AI System Name: [e.g., HireVue Video Interview Assessment]
  • Vendor/Developer: [HireVue, Inc.]
  • Version: [3.2.1]
  • Deployment Date: [MM/DD/YYYY]

2. Purpose and Intended Use

This AI system analyzes video interview responses to evaluate candidate qualifications for [job role]. The system scores candidates based on verbal content, communication skills, and problem-solving approaches to help recruiters identify candidates for in-person interviews.

3. Data Processed

  • Video recordings of candidate responses
  • Audio transcripts and speech patterns
  • Resume data and application responses
  • Assessment question responses

4. Discrimination Risk Analysis

Identified risk: Speech pattern analysis may disadvantage candidates with accents or speech impediments. Mitigation: Humans review all AI scores before decisions; candidates can request alternative assessment formats under ADA accommodations.

5. Bias Testing Results

Annual bias audit conducted December 2025 showed selection rates: [Include specific data by protected category]. No statistically significant disparate impact detected. Results available upon request.
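A claim of "no statistically significant disparate impact" like the one above is commonly supported with a two-proportion z-test on selection rates. This is standard statistics rather than a method the statute prescribes; a minimal sketch:

```python
import math

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """z statistic for the difference between two groups' selection
    rates, using the pooled standard error. |z| >= 1.96 indicates
    significance at roughly the 5% level (two-sided)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With 50 of 100 applicants selected in one group and 30 of 100 in another, z is about 2.89, above the 1.96 cutoff, so that gap would count as statistically significant and should be investigated rather than reported as clean.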

6. Consumer Rights Implementation

Candidates notified of AI use in job postings and application confirmations. Adverse decision notices provided within 5 business days. Human review available upon request within 14 days.

Common Compliance Pitfalls

Pitfall #1: Assuming Vendor Compliance = Your Compliance

Even if your AI vendor is "Colorado compliant," you as the deployer are responsible for conducting impact assessments, providing notifications, and implementing risk management. Vendor support is helpful but doesn't eliminate your obligations.

Pitfall #2: Incomplete Impact Assessments

Generic or superficial impact assessments won't satisfy the law. The Attorney General can request assessments and will scrutinize whether they genuinely analyze discrimination risks specific to your use case.

Pitfall #3: Failing to Update After System Changes

When your vendor releases a new model version or you change how the AI is configured, you must update the impact assessment. Don't assume the old assessment covers new functionality.

Pitfall #4: Inadequate Human Review

"Rubber-stamping" AI recommendations doesn't constitute meaningful human oversight. Human reviewers must have genuine authority to override AI and must be trained to recognize bias.

Key Takeaways

  • Colorado SB24-205 is comprehensive — impact assessments, notices, adverse decision statements, and risk management all required
  • All employment AI is likely high-risk — hiring, promotion, and termination decisions are explicitly covered
  • Enforcement is real — AG Phil Weiser has prioritized AI regulation with up to $20,000 per violation
  • 60-day cure period — first-time violators get a chance to cure before penalties, but don't rely on it
  • No private lawsuits — only AG can enforce, reducing litigation risk compared to Illinois BIPA
  • Start now — effective February 1, 2026, means impact assessments and policies should be done by January
  • Vendor cooperation essential — you need vendor bias testing data and system documentation to complete assessments


Ready for Colorado Compliance?

Colorado's AI Act creates complex obligations with tight deadlines. Take our free compliance scorecard to assess your readiness and get a personalized action plan.

Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.
