
What Counts as High-Risk AI in Employment Under Colorado Law?

Colorado SB 24-205 defines high-risk AI systems in employment. Learn what qualifies, what's excluded, and how to assess your HR and hiring tools for Colorado AI Act compliance obligations.

<div class="bg-amber-50 border-l-4 border-amber-400 p-4 my-6"> **Educational Resource:** This guide is for informational purposes only and does not constitute legal advice. Colorado SB 24-205 is subject to ongoing regulatory rulemaking that may refine these definitions. Consult qualified legal counsel to assess your specific AI tools under Colorado law. </div>

Colorado's AI Act (SB 24-205) uses the concept of "high-risk AI systems" as its organizing principle. If your AI tool qualifies as high-risk in employment, you're subject to impact assessment requirements, candidate notification obligations, appeal rights, and potential penalties of up to $20,000 per violation.

If it doesn't qualify, most SB 24-205 obligations don't apply.

This guide explains exactly what makes an AI system "high-risk" in an employment context under Colorado law, gives concrete examples of tools that do and don't qualify, and explains the gray areas where employers need to think carefully.

<div class="bg-blue-50 border-l-4 border-blue-500 p-6 my-8"> **Key Dates**
  • Enacted: May 17, 2024
  • Effective Date: June 30, 2026 (extended from February 1, 2026)
  • Rulemaking: Ongoing — final rules may refine these definitions
  • Enforcement: Colorado Attorney General Phil Weiser
</div>

The Statutory Definition: Two Requirements

Under Colorado SB 24-205, a high-risk artificial intelligence system is an AI system that:

Requirement 1: Uses machine learning, statistical modeling, data analytics, or AI

AND

Requirement 2: When deployed, makes, or is a substantial factor in making, a consequential decision

Employment decisions are explicitly listed as consequential decisions under the statute.


Unpacking "Consequential Decision" in Employment

The statute lists employment decisions as a category of consequential decision. Specifically covered decisions include:

  • Hiring and employment decisions
  • Termination of employment
  • Promotion or demotion
  • Pay or compensation decisions
  • Benefits allocation
  • Performance evaluation that affects employment status or compensation
  • Training and development opportunities

The law is broad. If AI plays a substantial role in any of these decisions for an employee or candidate, it likely meets this prong.


The Critical Question: "Substantial Factor"

The "substantial factor" requirement is where the analysis gets nuanced. The law does NOT require that AI be the sole or final decision-maker. AI that merely informs or assists a human decision-maker can still be a substantial factor.

<div class="bg-orange-50 border-l-4 border-orange-500 p-4 my-6"> **The Common Misconception**

"Our recruiters make the final call, so the AI is just a tool — not a substantial factor."

This argument does not hold up under the statute's definition of "substantial factor." If AI narrows a pool of 300 candidates to 20 for human review, that AI is a substantial factor even though a human reviews the final 20. The candidates excluded by the AI never got human consideration.

</div>

Factors that suggest "substantial factor" status:

  • The AI output is given significant weight in the decision
  • Candidates below a certain AI threshold are never reviewed by humans
  • The AI is one of a defined set of criteria and is weighted heavily
  • The decision-maker would need a specific reason to override the AI recommendation

Factors that suggest the AI may NOT be a substantial factor:

  • AI output is used only as a low-weight input among many
  • Humans independently evaluate all candidates regardless of AI scores
  • AI is used only for administrative or logistical purposes (scheduling, communication routing)
  • The AI provides general information without candidate-specific scoring or ranking
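These factors are judgment calls, not a mechanical test, but they can be turned into a rough triage screen for an internal tool audit. The indicator names and the decision rule below are illustrative assumptions for this sketch, not anything defined in SB 24-205; treat the output as a prompt for legal review, never as a compliance determination.

```python
# Illustrative triage heuristic mirroring the two factor lists above.
# All indicator names and the classification rule are assumptions made
# for this sketch -- they do not come from the statute.

SUBSTANTIAL_INDICATORS = {
    "output_given_significant_weight",
    "low_scores_never_reviewed_by_humans",
    "heavily_weighted_criterion",
    "override_requires_justification",
}

NOT_SUBSTANTIAL_INDICATORS = {
    "low_weight_input_among_many",
    "humans_review_all_candidates",
    "administrative_use_only",
    "no_individual_scoring",
}

def screen_substantial_factor(indicators: set) -> str:
    """Rough triage: flag tools that need a full legal analysis."""
    pro = len(indicators & SUBSTANTIAL_INDICATORS)
    con = len(indicators & NOT_SUBSTANTIAL_INDICATORS)
    if pro and not con:
        return "likely substantial factor"
    if con and not pro:
        return "likely not substantial factor"
    return "requires further analysis"

print(screen_substantial_factor({"low_scores_never_reviewed_by_humans"}))
```

A tool that shows indicators from both lists lands in "requires further analysis," which matches how the article recommends handling gray areas: document the reasoning in writing and escalate to counsel.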

Concrete Examples: What Qualifies as High-Risk in Employment?

Likely High-Risk ✓

Resume Screening AI

  • ATS features that score, rank, or filter applications using machine learning
  • Tools like LinkedIn Recruiter's "Recommended Matches," Indeed's Match Score, or ATS-native AI ranking
  • Why: Machine learning system producing candidate-specific output (score/ranking) that substantially filters who advances

Video Interview Analysis Platforms

  • HireVue, Spark Hire, or similar platforms using AI to analyze facial expressions, tone, speech patterns
  • Why: AI produces simplified output (score/recommendation) based on statistical modeling of candidate behavior

Cognitive and Skills Assessments with AI Scoring

  • Assessment platforms where candidate responses are algorithmically scored and used to rank/filter applicants
  • Why: Statistical modeling produces output that substantially assists advancement decisions

Performance Management AI

  • Systems that produce employee performance scores or risk-of-termination predictions used by managers in evaluation
  • Why: AI output is substantial factor in consequential employment decision (performance rating, pay, advancement)

Succession Planning and Internal Mobility AI

  • Tools that recommend employees for advancement, promotion, or development programs using ML models
  • Why: ML recommendation is substantial factor in consequential promotion/development decision

Not Likely High-Risk ✗

Basic ATS Functionality

  • Applicant tracking without AI scoring or ranking
  • Calendar scheduling and application routing
  • Keyword search without ML-based ranking or scoring

Communication Tools

  • Email platforms, calendar tools, messaging applications without analytical features

HR Analytics Dashboards

  • Aggregate workforce analytics that inform organizational strategy but don't make individual-level consequential decisions
  • Example: Workforce planning tool showing attrition trends by department (not individual scores)

Compliance and Documentation Tools

  • Tools that generate required compliance documents, track consent, or manage records
  • They process data but don't make consequential employment decisions

Background Check Platforms (Standard)

  • Standard criminal background check or employment verification services
  • Note: If the platform uses AI to produce a risk score that substantially influences hiring, it may qualify

Gray Areas: Requires Careful Analysis

ATS with Optional AI Features

If your ATS has AI ranking features you've disabled, the disabled feature is likely not covered. Document that you've disabled it and don't rely on the AI output.

Vendor AI You Didn't Know About

Many ATS and HRIS platforms have quietly added AI features. "We didn't know the feature existed" is not a legal defense under SB 24-205. Audit your platforms and ask vendors directly: does your system use machine learning, statistical modeling, or AI to score, rank, or evaluate individual candidates or employees?

AI Used for Job Recommendations to Candidates

When Indeed or LinkedIn recommends your job posting to candidates using AI targeting, does that make your job posting a high-risk system? Current analysis suggests employer-side tools (your use of AI to evaluate candidates) are the primary target of SB 24-205, not platform-side job matching. But how employers use targeting features to include or exclude candidate groups may still have EEOC implications.

Generative AI Assistants (ChatGPT, Copilot)

If a recruiter uses a generative AI to summarize resumes or draft interview notes, and that AI-generated summary substantially influences who advances, there's a reasonable argument that qualifies. This is an evolving area. Document your use policies.

HR Chatbots with Screening Features

Chatbots that screen candidates (ask qualifying questions and route based on responses) using ML-based intent classification may qualify if the routing decision substantially affects advancement.


What Colorado Rulemaking May Clarify

The Colorado Attorney General's office is conducting rulemaking through 2025-2026 that will provide more specific guidance on:

  • Minimum thresholds for "substantial factor" determination
  • Documentation and record retention requirements
  • Specific content requirements for consumer notifications
  • How to handle AI tools where data for bias assessment is unavailable

Employers should monitor the Colorado AG's rulemaking proceedings. EmployArmor tracks these changes and updates compliance guidance automatically.


Building Your High-Risk AI Inventory

The first compliance step under Colorado SB 24-205 is knowing what you're working with. Recommended approach:

  1. Audit all HR/hiring platforms — List every software tool used in hiring, performance management, and employment decisions
  2. Ask each vendor directly: Does your platform use machine learning, AI, or statistical modeling to produce individual-level scores, rankings, or recommendations for employment decisions?
  3. Classify each tool as likely high-risk, likely not high-risk, or requires further analysis
  4. Complete impact assessments for all likely high-risk tools before June 30, 2026

For tools in the "requires further analysis" category, document your analysis in writing. If you later determine the tool does qualify, having documented your good-faith analysis demonstrates proactive compliance.
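As a minimal illustration, steps 2 and 3 of the audit above could be tracked in a simple script. The tool names, field names, and classification logic here are hypothetical sketches of the process, not legal categories; real classifications belong in your written analysis with counsel.

```python
# Minimal inventory tracker for the four-step audit above. Tool names,
# fields, and the classification logic are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class HRTool:
    name: str
    uses_ml_or_statistical_modeling: bool  # vendor's answer to step 2
    individual_level_output: bool          # scores/ranks specific people?
    analysis_notes: str = ""               # written record for step 3

def classify(tool: HRTool) -> str:
    """Step 3: bucket each tool for further review."""
    if tool.uses_ml_or_statistical_modeling and tool.individual_level_output:
        return "likely high-risk"
    if not tool.uses_ml_or_statistical_modeling:
        return "likely not high-risk"
    return "requires further analysis"

inventory = [
    HRTool("ATS resume ranking module", True, True),
    HRTool("Calendar scheduling add-on", False, False),
    HRTool("Aggregate attrition dashboard", True, False,
           "ML-based but no individual-level output; document reasoning"),
]

for tool in inventory:
    print(f"{tool.name}: {classify(tool)}")
```

Keeping the `analysis_notes` field populated gives you exactly the kind of written, good-faith record the article recommends for tools in the gray zone.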


Get a Colorado AI Act compliance assessment for your hiring stack: Start Free Assessment →


Legal Disclaimer: This content is provided for educational purposes and is not legal advice. Colorado SB 24-205 is subject to ongoing rulemaking that may change specific requirements and definitions. Always consult with qualified legal professionals for advice specific to your AI tools and situation. EmployArmor does not provide legal services.

Last updated: April 2026

Ready to comply?

Get your personalized compliance assessment in 2 minutes — free.