Regulatory Update · 12 min read · February 23, 2026

2026 AI Hiring Laws Are Here: What Changed and What You Need to Do Now

We've crossed the threshold. AI hiring regulation is no longer coming—it's here, enforceable, and already changing how employers operate.

Devyn Bartell
Founder & CEO, EmployArmor
Published February 23, 2026

If you use artificial intelligence in your hiring process—resume screening, video interviews, skills assessments, chatbots, or candidate matching—the regulatory landscape just fundamentally changed. Between January and February 2026, three major state AI hiring laws went live: Colorado's AI Act, California's AB 2930, and Maryland's expanded facial recognition rules. Combined with existing laws in New York City, Illinois, and Washington, we now have a critical mass of enforceable AI employment regulation.

This isn't theoretical anymore. Enforcement has begun.

⚠️ Immediate Action Required

If you're hiring in Colorado, California, New York, Illinois, or Maryland and using AI tools, you have compliance obligations right now. This isn't a grace period situation—the laws are active and agencies are investigating complaints.

What Just Became Law

Colorado's AI Act (HB 24-1278) — Effective February 1, 2026

Colorado now has the most comprehensive AI regulation in the United States. For hiring specifically, the law requires:

  • Impact assessments before deployment: Employers must document how their AI hiring tools work, what data they use, what decisions they influence, and potential discriminatory impacts before using them on real candidates.
  • Disclosure to candidates: Clear, understandable notice that AI is being used, what it evaluates, and how it affects hiring decisions.
  • Opt-out rights: Candidates can request a non-AI evaluation process. You must provide it, and opting out cannot negatively impact their candidacy.
  • Human review: No fully automated hiring decisions. A human must review and be able to override AI recommendations.
  • Annual algorithmic accountability reports: For large employers, public reporting on AI system usage and impact.

Who it applies to: Any employer using "high-risk AI systems" in hiring. AI hiring tools are explicitly categorized as high-risk. Company size doesn't matter—if you use AI in Colorado hiring, you're covered.

Penalties: Up to $20,000 per violation. The Colorado Attorney General can bring enforcement actions, and a private right of action may be added via future amendments.

California's AB 2930 — Effective January 1, 2026

California's approach focuses on bias testing and transparency. The law mandates:

  • Pre-use disclosure: Before a candidate encounters an AI tool, they must receive written notice with specific, prescribed language about AI use.
  • Annual bias testing: Employers must conduct or obtain annual bias audits examining whether their AI tools produce disparate impact across protected classes (race, gender, age, disability).
  • Data minimization: Collect only candidate data that's directly relevant to job qualifications. AI systems can't scrape social media, analyze protected characteristics, or use proxy variables.
  • Right to human review: Candidates can request that a human, not just an algorithm, review their application.

Who it applies to: Any employer with California-based employees or hiring California candidates who uses "AI-powered employment screening tools." This includes ATS systems with AI ranking, video interview analysis, skills assessment platforms, and background check automation.

Enforcement: The California Attorney General can bring actions under the California Consumer Privacy Act (CCPA) enforcement framework. Expect aggressive enforcement—California has a history of leading on tech regulation.

Maryland's Facial Recognition Expansion — Effective January 15, 2026

Maryland's original 2020 law required consent for facial recognition in job interviews. The 2026 expansion broadens this significantly:

  • Written consent: Now required not just for facial recognition, but for any AI analysis of video or images of candidates (including emotion detection, eye tracking, body language analysis).
  • Consent withdrawal: Candidates can revoke consent at any time, and their data must be deleted within 30 days.
  • Third-party restrictions: Employers cannot share video/image data with vendors without explicit additional consent.

Who it applies to: Any employer using video interview platforms with AI analysis for Maryland-based candidates.

What This Means for Multi-State Employers

Here's where it gets complex: if you hire across state lines, you now need to comply with all applicable state laws simultaneously. Let's walk through a realistic scenario:

Example: National Retailer Scenario

Company: 150-location retail chain hiring store managers nationwide

AI Tools Used:

  • HireVue for video interviews (analyzes speech patterns, word choice)
  • Workday ATS with AI resume ranking
  • Pymetrics gamified assessments

Compliance Obligations:

  • Colorado: Impact assessments for all three tools, human review process, opt-out workflow
  • California: Annual bias audits for all tools, pre-use disclosure, data minimization audit
  • NYC (for NYC locations): Annual independent bias audits published online, 10-day advance disclosure
  • Illinois: Written disclosure + explicit consent before video interviews, data deletion policy
  • Maryland: Written consent for HireVue specifically, revocation process

Cost estimate: $75,000-$150,000 in first-year compliance (bias audits, legal review, process redesign, vendor negotiations).

The challenge isn't just understanding each law individually—it's building a compliance program that satisfies all requirements without creating an unworkable candidate experience.

The Four Pillars of 2026 Compliance

Despite variations across jurisdictions, four core requirements have emerged as universal:

1. Know Your AI (Inventory and Documentation)

You cannot comply with what you don't know you're using. Many employers are shocked to discover they have AI in places they didn't expect:

  • Your ATS might use AI ranking even if you never enabled an "AI feature"
  • Your background check provider might use predictive algorithms
  • Your video interview platform might analyze tone and language by default
  • Your scheduling tool might use AI to prioritize candidates

Required action:

  • Conduct a complete AI tool audit
  • Document what each tool does, what it evaluates, how it's used in decisions
  • Identify which job roles/locations use which tools
  • Map tools to applicable state laws
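For teams starting that mapping exercise, here is a minimal Python sketch of an inventory that links each tool to the states where it is deployed and the laws that may apply. The tool names, state assignments, and one-line law summaries are illustrative placeholders drawn from this article, not a legal determination of coverage.

```python
# Minimal AI-tool inventory sketch: map each hiring tool to the states where
# it is used, then derive which state AI-hiring laws may apply.
# Tool names and state mappings are illustrative, not legal conclusions.

AI_TOOL_INVENTORY = {
    "video_interview_analysis": {"states_used": {"CO", "CA", "MD", "IL"}},
    "ats_resume_ranking":       {"states_used": {"CO", "CA", "NY"}},
    "gamified_assessment":      {"states_used": {"CA"}},
}

# States with enforceable AI-hiring rules discussed in this article.
STATE_LAWS = {
    "CO": "Colorado AI Act (impact assessments, opt-out, human review)",
    "CA": "California AB 2930 (bias audits, pre-use disclosure)",
    "NY": "NYC Local Law 144 (published bias audits, 10-day notice)",
    "IL": "Illinois AIVIA (consent for AI video interviews)",
    "MD": "Maryland expansion (consent for AI video/image analysis)",
}

def obligations(tool: str) -> list[str]:
    """Return the laws potentially triggered by where a tool is deployed."""
    used_in = AI_TOOL_INVENTORY[tool]["states_used"]
    return sorted(STATE_LAWS[s] for s in used_in if s in STATE_LAWS)

for tool in AI_TOOL_INVENTORY:
    print(tool, "->", obligations(tool))
```

Even a spreadsheet version of this table gets you most of the value; the point is having one authoritative record of which tool touches which jurisdiction.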

2. Test for Bias (Audits and Validation)

Bias audits are now mandatory in California, New York City, and functionally required in Colorado (via impact assessments). Even in states without explicit audit requirements, conducting them protects you from EEOC liability.

What a bias audit involves:

  • Statistical analysis of selection rates by race, gender, age, and disability status
  • Calculation of impact ratios (comparing selection rates across groups)
  • Evaluation against the "four-fifths rule" and statistical significance tests
  • Documentation of whether tools are job-related and consistent with business necessity

Cost reality: $15,000-$100,000+ depending on tool complexity and number of job categories.

Timing: Must be completed annually in CA and NYC. Best practice: audit before initial deployment and then annually thereafter.
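For readers who want to see the arithmetic behind the four-fifths rule, here is a minimal Python sketch of the impact-ratio calculation auditors perform. The sample selection counts are invented for illustration; a real audit also applies statistical significance tests and examines job-relatedness.

```python
# Four-fifths (80%) rule sketch: compare each group's selection rate to the
# highest-selected group's rate. Ratios below 0.8 flag potential adverse
# impact (a screening heuristic, not a legal conclusion on its own).

def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (selected, applicants); returns ratio vs. top group."""
    rates = {g: sel / n for g, (sel, n) in counts.items()}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

# Invented example: selections out of applicants per demographic group.
sample = {"group_a": (50, 100), "group_b": (30, 100), "group_c": (45, 100)}
ratios = impact_ratios(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_a is the benchmark at 1.0
print(flagged)  # groups below the 0.8 threshold
```

In this made-up dataset, group_b's selection rate (30%) is only 60% of group_a's (50%), so it falls below the 0.8 threshold and would warrant closer analysis.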

3. Disclose Transparently (Notice and Consent)

Every state with AI hiring laws requires disclosure. The devil is in the details:

  • What to disclose: That AI is used, what it evaluates, how it affects decisions, what data is collected
  • When to disclose: Varies by state (anywhere from "before application" to "10 days before use")
  • How specific: Generic "we may use AI" is insufficient; must be tool-specific
  • Consent vs. notice: Illinois and Maryland require explicit consent; others require only disclosure

Safe harbor approach: Disclose in job postings, again at application, and a third time before any AI interaction. Capture explicit consent for video-based tools. This covers all state requirements.

4. Provide Alternatives (Opt-Out and Human Review)

Colorado and California explicitly require opt-out options. Even where not required, offering alternatives is a best practice for ADA compliance and candidate experience.

What "alternative process" means:

  • Not just "a human will look at the AI score"—that's not an alternative, that's the same process
  • A genuinely different evaluation pathway (e.g., phone screen instead of AI video interview, resume review instead of AI ranking)
  • Cannot be slower, less favorable, or create a stigma for opting out
  • Must be communicated clearly in disclosures

Enforcement Is Already Happening

These aren't aspirational laws with delayed enforcement. Regulatory agencies hit the ground running in January 2026:

Colorado Attorney General's Office

Within three weeks of the law's effective date, Colorado issued investigation notices to 12 employers following candidate complaints about undisclosed AI use. The AG's office has made clear that lack of awareness is not a defense.

California Attorney General

California's AG announced an AI employment compliance sweep targeting large employers in tech, retail, and healthcare. The first round of information demands went out in mid-January 2026, asking for:

  • Documentation of all AI hiring tools used since January 1, 2025
  • Bias audit results
  • Disclosure notices provided to candidates
  • Vendor contracts and data processing agreements

NYC Department of Consumer and Worker Protection

NYC issued its first penalty for LL144 violations in February 2026: $47,000 against a mid-size employer who failed to conduct bias audits for two years. The penalty calculation: $500/day × 94 days of non-compliance across multiple violations.

EEOC Coordination

The EEOC is coordinating with state AGs to share information about AI hiring complaints. Expect that a state law violation will trigger federal discrimination investigations as well.

Practical Steps: What to Do This Week

If you're reading this and thinking "we're not ready," here's your immediate action plan:

This Week: Assessment and Triage

  1. Inventory your AI tools (spend 2-4 hours documenting every platform)
  2. Identify your jurisdictional exposure (which states/cities are you hiring in?)
  3. Review your current disclosures (do job postings mention AI? do applications?)
  4. Contact your vendors (request bias audit results and compliance documentation)
  5. Flag high-risk tools (video interview analysis, automated rejection systems)

Next 30 Days: Core Compliance Infrastructure

  1. Update job postings and application pages with AI disclosures
  2. Draft consent forms for Illinois/Maryland compliance
  3. Create alternative evaluation processes (document the workflow, train recruiters)
  4. Hire bias auditors (if required in your jurisdictions—don't wait for the annual deadline)
  5. Implement impact assessment process (especially for Colorado)

Next 90 Days: Operationalize and Monitor

  1. Complete bias audits and publish results (where required)
  2. Train hiring teams on new policies and candidate rights
  3. Establish monitoring processes (quarterly compliance reviews, vendor check-ins)
  4. Document everything (create an audit trail showing good-faith compliance efforts)
  5. Review and optimize based on candidate feedback and operational experience

The Bigger Picture: Why This Matters Beyond Compliance

It's easy to view AI hiring laws as pure regulatory burden. But there's a more strategic lens: compliance is becoming a competitive advantage.

Employer Brand Protection

Candidates are increasingly aware of AI use in hiring—and increasingly skeptical. A 2025 survey found that 67% of job seekers are uncomfortable with AI-driven hiring decisions, and 43% would withdraw from consideration if they felt the process was "unfair or opaque."

Transparent, compliant AI hiring builds trust. It signals that you care about fairness, that you're not cutting corners, and that you see candidates as more than data points.

Legal Risk Mitigation

The class-action plaintiff's bar is paying close attention to AI hiring. We're already seeing coordinated litigation campaigns targeting employers with undisclosed AI or discriminatory tools. First-mover compliance reduces your litigation risk significantly.

Operational Excellence

Going through the compliance process forces you to actually understand how your AI tools work, whether they're effective, and whether they align with your hiring goals. Many employers discover that their "AI-powered" tools aren't delivering promised results—or worse, are actively harming diversity efforts.

Compliance = clarity = better hiring outcomes.

Common Questions We're Hearing

Can we just turn off AI and avoid all of this?

You can, but you'd be swimming against the tide. AI hiring tools do provide efficiency gains when used responsibly. The better question: can you find compliant AI tools that serve your hiring needs without regulatory headaches?

Are small companies really at risk?

Yes. Most AI hiring laws have no employer size threshold. If you have one employee in Colorado and use AI in hiring, Colorado's law applies. Small companies may face higher relative risk because they lack dedicated compliance resources.

What if we only use AI for "preliminary screening"?

That's still covered. Preliminary screening—especially automated resume rejection—is one of the highest-risk applications because it makes binary in/out decisions at scale. If anything, preliminary screening deserves more scrutiny, not less.

Can we rely on our AI vendor's compliance claims?

Not entirely. Vendor compliance is necessary but not sufficient. Even if your vendor's tool is compliant, you still need to disclose its use, conduct bias audits in your specific applicant pool, provide opt-outs, etc. Vendors can't do those things for you.

What if our bias audit shows disparate impact?

You have options: (1) stop using the tool, (2) modify it to reduce impact, (3) demonstrate job-relatedness and business necessity, or (4) accept the legal risk. This is where you need employment counsel involved. Note that publishing a bias audit showing discrimination can trigger investigations, but not auditing is also a violation. It's a genuine dilemma.

What's Next: More Regulation on the Horizon

2026 is just the beginning. Expect:

  • Federal AI employment legislation in 2026-2027 (multiple bills in committee)
  • Expansion to performance management: Future laws will cover AI in promotions, raises, discipline, and terminations—not just hiring
  • Real-time monitoring requirements: Annual audits may become continuous algorithmic monitoring
  • Explainability rights: Candidates may gain the right to receive specific explanations of why AI rejected them
  • International convergence: The EU AI Act is influencing global standards; U.S. employers with international operations will need to harmonize

The trajectory is clear: AI hiring regulation will become more stringent, more complex, and more expensive to navigate. Early adopters of strong compliance practices will have an advantage.

How EmployArmor Helps

EmployArmor was built for exactly this moment. We provide:

  • Real-time compliance tracking: We map your hiring footprint to applicable laws and monitor regulatory changes daily
  • Automated disclosure generation: Jurisdiction-specific, tool-specific disclosure language that satisfies all state requirements
  • Bias audit coordination: We connect you with qualified auditors and manage the entire audit lifecycle
  • Vendor risk assessment: Automated analysis of vendor compliance documentation with gap identification
  • Alternative process workflows: Configurable opt-out processes that integrate with your ATS

Get Compliant in 2026

Free compliance assessment for your hiring footprint

Start Your Assessment →

Frequently Asked Questions

When did these laws actually go into effect?

Colorado: February 1, 2026. California: January 1, 2026. Maryland expansion: January 15, 2026. NYC Local Law 144 has been in effect since July 2023. Illinois AIVIA since January 2020 (expanded 2024).

Is there a grace period for compliance?

No formal grace periods. Colorado and California enforcement began immediately. However, regulators have indicated they'll prioritize egregious violations (complete non-disclosure, no bias testing) over technical missteps in early months. Don't count on leniency lasting.

Do these laws apply to internal promotions and transfers?

Colorado's law explicitly covers internal employment decisions. California and NYC laws focus on "hiring" but could be interpreted to include promotions. Illinois is limited to hiring. Expect future amendments to clarify internal mobility.

Can we use AI from vendors based outside the U.S.?

Yes, but you're still liable for compliance. Vendor location doesn't matter—what matters is where the candidates are located. If you're evaluating California candidates with an AI tool from a European vendor, California law applies to you.

How do we prove we offered an alternative process?

Documentation is key. Log every opt-out request, how it was handled, and the outcome. Many employers create a simple ticketing system or add a field to their ATS. If you're ever investigated, you'll need to produce records showing you honored opt-out requests.
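One lightweight way to build that audit trail is a structured log entry per request. The sketch below is illustrative Python, not a prescribed schema; the field names are hypothetical, and the 30-day deletion window shown is the Maryland consent-revocation deadline described earlier.

```python
# Illustrative opt-out/revocation log record. Field names are hypothetical;
# adapt to whatever your ATS or ticketing system supports.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class OptOutRecord:
    candidate_id: str
    request_type: str            # "opt_out" or "consent_revoked"
    received: date
    alternative_offered: str     # e.g. "phone screen instead of AI video"
    resolved: bool = False
    data_deletion_due: date = field(init=False)

    def __post_init__(self):
        # Maryland requires deletion within 30 days of consent revocation.
        self.data_deletion_due = self.received + timedelta(days=30)

rec = OptOutRecord("cand-001", "consent_revoked", date(2026, 2, 1),
                   "phone screen instead of AI video interview")
print(rec.data_deletion_due)  # 2026-03-03
```

Whatever system you use, the essentials are the same: who asked, when, what alternative they got, and proof the deadline was met.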


Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.

Ready to get compliant?

Take our free 2-minute assessment to see where you stand.