As of February 2026, 17 states and 23 municipalities have active AI hiring laws. Another 12 states have pending legislation. For employers hiring across multiple jurisdictions, this creates a compliance nightmare: overlapping requirements, conflicting standards, and constantly shifting regulations.
This guide breaks down every active AI hiring law in the United States, organized by compliance requirement, penalty structure, and practical impact.
Quick Navigation:
- Comprehensive Regulation States (Tier 1)
- Targeted Regulation States (Tier 2)
- Pending Legislation States (Tier 3)
- Full Comparison Table
- Multi-State Compliance Strategy
Understanding the Regulatory Tiers
Not all AI hiring laws are created equal. We've organized states into three tiers:
- Tier 1 (Comprehensive): Broad coverage of AI tools + multiple compliance requirements (disclosure, audits, impact assessments) + meaningful penalties
- Tier 2 (Targeted): Narrow scope (e.g., facial recognition only) or limited requirements (disclosure-only)
- Tier 3 (Pending): Legislation introduced but not yet passed; worth monitoring
Tier 1: Comprehensive Regulation States
These states have the most demanding AI hiring compliance frameworks:
Illinois - AIVIA (820 ILCS 42)
Effective: January 1, 2020 (expanded via HB 3773, effective January 1, 2026)
Scope: Any AI tool used to evaluate job applicants, including:
- Video interview analysis
- Resume screening
- Skills assessments
- Candidate ranking systems
Requirements:
- ✓ Disclosure: Written notice explaining AI use, what it evaluates, and how it impacts decisions
- ✓ Consent: Affirmative opt-in required before AI evaluation
- ✓ Alternative process: Must offer non-AI evaluation for candidates who decline
- ✓ Data deletion: Destroy video/data within 30 days upon request
Penalties: $500 per violation (first offense), $1,000 per violation (subsequent)
Enforcement: Illinois Department of Labor
New York City - Local Law 144
Effective: July 5, 2023
Scope: Automated Employment Decision Tools (AEDTs) that substantially assist or replace discretionary decision-making in hiring or promotion
Requirements:
- ✓ Bias audit: Annual independent audit required
- ✓ Audit publication: Results must be posted publicly on company website
- ✓ Disclosure: Notice to candidates at least 10 business days before AEDT use
- ✓ Alternative process: Available upon request
- ✓ Data retention policy: Published explanation of what data is collected and retained
Penalties: $500 to $1,500 per violation (each day of non-compliance = separate violation)
Enforcement: NYC Department of Consumer and Worker Protection (DCWP)
Notable: NYC has issued multiple enforcement actions. First penalties levied in Q4 2025.
Colorado - AI Act (SB 24-205)
Effective: February 1, 2026
Scope: High-risk AI systems in employment, defined as systems that make or substantially influence consequential decisions about employment
Requirements:
- ✓ Impact assessment: Before deployment and annually thereafter
- ✓ Disclosure: Clear notice to candidates and employees
- ✓ Opt-out rights: Candidates can request non-AI alternative
- ✓ Human review: Meaningful human oversight of automated decisions
- ✓ Algorithmic accountability: Annual report to Colorado AG
- ✓ Risk management: Documented policies for AI governance
Penalties: Up to $20,000 per violation
Enforcement: Colorado Attorney General
California - AB 2930
Effective: January 1, 2026
Scope: Automated decision systems that screen, evaluate, or rank job candidates
Requirements:
- ✓ Pre-use disclosure: Before AI tool is used
- ✓ Bias testing: Annual evaluation for discriminatory impact
- ✓ Data minimization: Collect only job-relevant data
- ✓ Human review rights: Candidates can request human re-evaluation
- ✓ Privacy protections: CCPA-style data handling requirements
Penalties: CCPA-style enforcement (AG can seek civil penalties)
Enforcement: California Attorney General
Washington - SB 5116
Effective: March 31, 2024
Scope: Automated employment decision systems
Requirements:
- ✓ Disclosure: Notice of AI use
- ✓ Impact assessment: For high-risk systems
- ✓ Data protection: Algorithmic discrimination protections
Penalties: Consumer protection enforcement remedies
Tier 2: Targeted Regulation States
These states have narrower AI hiring laws, typically focused on specific technologies or limited requirements:
Maryland - HB 1202
Scope: Facial recognition in interviews only
Requirement: Written consent
Effective: October 1, 2020
New Jersey - A1 Bill (Pending final rules)
Scope: AI hiring tools that evaluate or rank candidates
Requirements: Disclosure + annual bias audit
Expected effective: 2026
Texas - No specific AI hiring law
However: Texas Capture or Use of Biometric Identifier Act (CUBI) applies to facial recognition and other biometric data in hiring
Requirement: Notice and consent for biometric data collection
Nevada - SB 370
Scope: General AI consumer protection law with employment provisions
Requirements: Disclosure of AI use; right to opt out
Effective: October 1, 2025
Tier 3: Pending Legislation States
These states have introduced AI hiring legislation that has not yet passed:
Massachusetts - S.2016 (Pending)
Proposed scope: Any AI that materially influences hiring decisions
Proposed requirements: Disclosure, bias audits, impact assessments, human oversight
Status: Committee review; expected vote Q2 2026
Oregon - HB 2557 (Pending)
Proposed scope: Automated employment systems
Proposed requirements: Notice, consent, audit rights
Connecticut - SB 1103 (Proposed)
Proposed scope: AI decision-making in employment
Proposed requirements: Impact assessments, disclosure, accountability mechanisms
The Complete Comparison Table
| State/City | Disclosure | Consent | Bias Audit | Impact Assessment | Alt. Process | Max Penalty |
|---|---|---|---|---|---|---|
| Illinois | ✓ | ✓ | — | — | ✓ | $1,000/violation |
| NYC | ✓ | — | ✓ Annual | — | ✓ | $1,500/day |
| Colorado | ✓ | ✓ | — | ✓ | ✓ | $20,000/violation |
| California | ✓ | — | ✓ Annual | — | ✓ | AG discretion |
| Washington | ✓ | — | — | ✓ | — | Consumer protection |
| Maryland | — | ✓ (facial only) | — | — | — | Not specified |
| Nevada | ✓ | — | — | — | ✓ | $5,000/violation |
| New Jersey | ✓ | — | ✓ | — | — | Pending rules |
Municipal Laws: Beyond State Regulations
Several cities have passed AI hiring laws more stringent than their state requirements. Key jurisdictions:
New York City (covered above)
Most comprehensive municipal AI hiring law in the U.S.
San Francisco - Surveillance Technology Ordinance
While not AI-hiring specific, SF's surveillance ordinance impacts employers using monitoring or analysis tools in hiring. Requires:
- Impact assessments for surveillance technology
- Public disclosure of technology use
- Oversight and accountability mechanisms
Portland, OR - Facial Recognition Ban
Portland prohibits private entities (including employers) from using facial recognition in places of public accommodation. Arguably extends to job interviews conducted in public-facing offices.
Multi-State Compliance Strategy
If you hire across multiple states, you need a unified compliance approach that satisfies all jurisdictions. Here's how to build it:
Strategy 1: Build to the Highest Standard
Identify the most demanding requirements across all states where you hire, then implement those everywhere. This "ceiling" approach ensures you're never out of compliance.
Example: If you hire in both NYC and Illinois:
- Conduct annual bias audits (NYC requirement) → applies to all tools
- Obtain explicit consent (Illinois requirement) → collect from all candidates
- Provide alternative process (both require) → offer universally
- Publish audit results (NYC requirement) → makes sense to do once, publicly
Result: You're simultaneously compliant in both jurisdictions with a single process.
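Conceptually, the "ceiling" approach is just a set union over per-jurisdiction requirement lists. A minimal sketch in Python (the jurisdiction names and requirement labels here are illustrative, not a legal taxonomy):

```python
# "Build to the highest standard": take the union of requirements
# across every jurisdiction where you hire.
# Requirement labels are illustrative examples, not legal terms of art.

REQUIREMENTS = {
    "Illinois": {"disclosure", "consent", "alternative_process", "data_deletion"},
    "NYC": {"disclosure", "bias_audit", "audit_publication", "alternative_process"},
}

def ceiling(jurisdictions):
    """Union of every requirement across the given jurisdictions."""
    combined = set()
    for j in jurisdictions:
        combined |= REQUIREMENTS[j]
    return combined

if __name__ == "__main__":
    # One unified process satisfying both Illinois and NYC.
    print(sorted(ceiling(["Illinois", "NYC"])))
```

Because the union only ever grows, adding a new jurisdiction to the map can only add obligations, never silently drop one.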
Strategy 2: Tool-Specific Compliance Mapping
Different AI tools may have different compliance requirements. Create a matrix:
| Tool | Use Case | States Where Used | Compliance Requirements |
|---|---|---|---|
| HireVue | Video interviews | IL, NY, CA, TX | Bias audit (NYC), consent (IL), disclosure (all) |
| Greenhouse AI | Resume screening | IL, NY, CO | Bias audit (NYC), consent (IL, CO), impact assessment (CO) |
| Pymetrics | Skills assessment | NY, CA | Bias audit (NYC), bias testing (CA), disclosure (both) |
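One way to keep such a matrix maintainable is to store per-state obligations once and derive each tool's combined obligations from its deployment states. A sketch under simplified assumptions (the state codes, obligation labels, and tool names below are placeholders, not legal guidance):

```python
# Illustrative compliance matrix: map each state to its obligations,
# then derive a tool's obligations from the states where it is deployed.
# All labels are simplified placeholders, not legal advice.

STATE_OBLIGATIONS = {
    "IL": {"consent", "disclosure"},
    "NY": {"bias_audit", "disclosure"},
    "CO": {"consent", "impact_assessment", "disclosure"},
    "CA": {"bias_testing", "disclosure"},
    "TX": set(),  # no AI-specific hiring law; biometric rules may still apply
}

TOOL_DEPLOYMENTS = {
    "video_interview_tool": ["IL", "NY", "CA", "TX"],
    "resume_screener": ["IL", "NY", "CO"],
}

def obligations_for(tool):
    """All obligations a tool must satisfy across its deployment states."""
    states = TOOL_DEPLOYMENTS[tool]
    return set().union(*(STATE_OBLIGATIONS[s] for s in states))
```

Updating one state's entry then automatically propagates to every tool deployed there, which is the main advantage over a hand-maintained table.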
Strategy 3: Geographic Segmentation
For very large employers, it may make sense to use different hiring workflows in different states—especially if certain AI tools are only cost-effective at scale.
Example:
- High-volume roles in regulated states: Use compliant AI tools with full bias audits
- Low-volume roles in non-regulated states: Consider traditional (non-AI) hiring to avoid compliance costs
- Remote roles: Default to most stringent compliance (assume candidates could be anywhere)
Caution: This approach requires sophisticated workflow management and clear policies to prevent mistakes.
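The routing logic behind geographic segmentation can be sketched as a simple lookup, with remote roles defaulting to the strictest path. The state set and workflow names below are placeholders, not recommendations:

```python
# Sketch of geographic workflow routing: regulated states get the
# fully compliant AI workflow; remote roles default to the strictest
# path because the candidate could be anywhere.
# REGULATED_STATES and the workflow names are placeholders.

REGULATED_STATES = {"IL", "NY", "CO", "CA", "WA"}

def workflow_for(candidate_state, is_remote=False):
    """Pick a hiring workflow based on candidate location."""
    if is_remote or candidate_state in REGULATED_STATES:
        return "compliant_ai_workflow"
    return "traditional_workflow"
```

Note the conservative default: when location is uncertain (remote), the function falls back to the compliant path rather than the cheaper one, which mirrors the caution above about preventing mistakes.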
Pending Federal Legislation
Congress is debating several bills that could establish national AI hiring standards. Key proposals:
Algorithmic Accountability Act (S. 2892)
Status: Committee review
Key provisions:
- Mandatory impact assessments for high-risk AI systems
- Annual reporting to FTC
- Algorithmic discrimination protections
- Consumer notification rights
AI Bill of Rights Implementation Act
Status: Introduced 2025
Key provisions:
- Right to notice when AI is used in consequential decisions
- Right to opt out of automated decision-making
- Right to challenge and appeal AI decisions
- Protections against algorithmic discrimination
Preemption Questions
If federal AI hiring legislation passes, it may:
- Preempt state laws entirely (unlikely given political dynamics)
- Create a federal floor with state flexibility (most likely outcome)
- Coexist with state laws (creating additional compliance layers)
Compliance implication: Don't assume federal law will simplify multi-state compliance. Plan for continued state-level variation.
Enforcement Trends Across States
Active Enforcement
New York City: First penalties issued Q4 2025; multiple investigations ongoing
Illinois: Pattern-and-practice investigations of staffing agencies
California: AG investigating major ATS vendors
Complaint-Driven Enforcement
Most states rely on complaints to trigger investigations. Common complaint sources:
- Candidates who believe they were unfairly screened out
- Whistleblowers (employees who see non-compliant practices)
- Media investigations uncovering AI tool issues
- Advocacy groups testing compliance systematically
Cross-Jurisdictional Collaboration
State AGs are increasingly coordinating AI enforcement. A violation in one state can trigger scrutiny in others where you operate.
International Considerations
If you hire in both the U.S. and internationally, consider:
EU AI Act
AI hiring tools are "high-risk" under the EU AI Act. Requirements include:
- Conformity assessments
- Human oversight
- Transparency obligations
- Record-keeping
Penalties: Up to €15M or 3% of global annual turnover for high-risk obligations (up to €35M or 7% for prohibited practices)
Canada - AIDA (Artificial Intelligence and Data Act)
AIDA was introduced as part of Bill C-27, which died on the order paper when Parliament was prorogued in early 2025. Successor legislation is expected to revive its impact assessment and transparency requirements for AI in employment.
UK - AI Regulation Bill (Proposed)
Sector-specific approach with employment provisions expected.
Common Multi-State Pitfalls
❌ Assuming state boundaries matter for remote roles
The problem: "Our company is based in Texas, so we don't need to worry about NYC law."
The reality: If you hire a candidate located in NYC, LL144 applies—regardless of where your company is headquartered.
❌ One-size-fits-all disclosure
The problem: Using generic "AI may be used" language everywhere.
The reality: Disclosure requirements vary significantly (timing, specificity, format). Build disclosures that satisfy the most demanding standard.
❌ Ignoring municipal laws
The problem: Tracking only state laws and missing city-specific requirements.
The reality: NYC, SF, Portland, and others have requirements beyond their states. Your compliance program must track municipal laws.
❌ Delayed compliance for new states
The problem: "We'll deal with Massachusetts when their law passes."
The reality: Laws often have short implementation windows. Waiting until passage creates rushed, incomplete compliance.
How EmployArmor Simplifies Multi-State Compliance
Managing 17+ state laws manually is impossible at scale. EmployArmor provides:
- Jurisdictional intelligence: We map your hiring footprint to every applicable law (state and municipal)
- Compliance crosswalk: Identify overlapping requirements and build unified processes
- Tool-specific guidance: Map each AI tool to relevant compliance obligations
- Regulatory monitoring: Real-time alerts when new laws pass or existing laws change
- Multi-state disclosure generation: Create compliant disclosures that work everywhere
- Audit coordination: Manage bias audits across multiple jurisdictions
Hiring in multiple states?
Get Your Multi-State Compliance Assessment →

Frequently Asked Questions
If we operate in a state without an AI hiring law, are we exempt from all requirements?
Not necessarily. Federal anti-discrimination law (Title VII, ADA, ADEA) applies everywhere. The EEOC's position is that AI tools must comply with existing civil rights protections regardless of state-specific AI laws. You're never exempt from ensuring your AI tools don't discriminate.
Do state AI hiring laws apply to contract workers and gig workers?
Most laws define applicability by "employment" or "hiring," which may or may not include contractors depending on state employment law definitions. Illinois AIVIA, for example, covers "applicants for employment"—a term with established legal meaning. When in doubt, apply AI hiring law protections to all worker classifications.
What if we use an AI tool provided by a recruiting agency?
You're still responsible for compliance. The fact that a third party (recruiter or vendor) provides the AI tool doesn't shift legal liability away from you as the employer. You must ensure any agency you work with uses compliant AI tools.
Can we use different AI tools in different states to simplify compliance?
Technically yes, but operationally complex. You'd need to track candidate location, route them to appropriate workflows, and maintain multiple vendor relationships. For most employers, it's simpler to use compliant tools everywhere and absorb the higher cost as the price of national hiring.
How often do these laws change?
Constantly. In 2025 alone, 8 states amended or passed new AI hiring laws. Expect ongoing legislative activity for the next 3-5 years as jurisdictions respond to enforcement experiences and new AI capabilities. You need a system for tracking changes, not a one-time compliance checklist.
If our company is headquartered in one state but hires remotely nationwide, which law applies?
Generally, the law of the candidate's location applies, not your headquarters location. If you're based in Texas (no AI hiring law) but hire a candidate in Colorado, Colorado AI Act requirements apply to that hire. For remote positions, you must comply with the laws of every state where candidates may work. This is why many employers adopt "highest common denominator" compliance—build to the strictest requirements (Colorado + California + NYC + Illinois combined) and apply nationally. See our Compliance Program Guide for multi-state implementation strategies.
Are there any states considering but not yet enforcing AI hiring laws that we should prepare for?
Yes. As of February 2026, pending legislation includes: Massachusetts (S.2016, expected vote Q2 2026), Oregon (HB 2557), Connecticut (SB 1103), Michigan (AI Workforce Fairness Act, introduced), Pennsylvania (HB 892, passed House, awaiting Senate), New Jersey (final rules pending for its AI hiring bill), and Minnesota (AI Employment Act, expected 2027). Additionally, Rhode Island and Vermont have task forces studying AI hiring regulation, with recommendations expected in 2026. If you hire in these states, monitor legislative developments and consider implementing compliance measures proactively: being ahead of the law reduces scrambling when it passes.
2026 Legislative Updates
States Strengthening Existing Laws
- Illinois (HB 4211, effective July 2026): Expands AIVIA to cover internal promotions explicitly, adds requirement for employers to maintain audit trail of AI decisions, and increases penalties to $2,000 per violation.
- California (AB 3104, effective Jan 2027): Creates private right of action for automated decision-making technology (ADMT) violations, allowing candidates to sue directly rather than relying on AG enforcement. Expected to trigger a litigation wave similar to CCPA/CPRA enforcement patterns.
- NYC (proposed Local Law amendment, pending): Would require employers to provide candidates with their individual AEDT scores upon request, not just aggregate audit results. Highly controversial among employers due to transparency concerns.
New State Entrants
- Massachusetts: Comprehensive AI hiring law modeled after Colorado's approach, expected passage Q2 2026.
- Washington: New legislation would expand on SB 5116, focusing on public sector AI hiring initially, with private sector provisions likely in 2027.
- Pennsylvania: Would be first Midwest state outside Illinois with AI hiring regulation, signals regional trend.
Related Resources
- Complete AI Hiring Compliance Guide 2026
- Illinois AIVIA Deep Dive
- Maryland HB 1202 Guide
- Federal AI Hiring Landscape
Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.