On May 12, 2022, the Equal Employment Opportunity Commission released comprehensive technical guidance on the use of artificial intelligence and algorithmic decision-making tools in employment. This wasn't just a policy statement; it was a clear signal that the EEOC views AI discrimination as an enforcement priority.
If you're using AI in hiring, promotion, or performance management, this guidance directly impacts your legal obligations. Here's what you need to know.
Key Document:
"The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" (May 2024)
This guidance focuses on ADA implications but references Title VII, ADEA, and other EEO laws throughout.
Core Principles from the EEOC Guidance
1. Federal EEO Laws Apply to AI Tools
The EEOC makes clear that existing anti-discrimination laws—Title VII, ADA, ADEA, GINA—fully apply to AI hiring systems. The use of technology doesn't create a legal safe harbor or change the standards.
"When employers use algorithmic decision-making tools, they remain responsible for compliance with the federal EEO laws. This is true even when employers contract with outside vendors to design or administer the assessment tools."
Practical implication: You can't outsource liability. Even if a vendor designed, trained, and operates your AI tool, you're still responsible if it discriminates.
2. Algorithmic Discrimination is Discrimination
Whether discrimination occurs through human decision-making or automated systems, the legal standard is identical. If an AI tool produces discriminatory outcomes—even without discriminatory intent—it may violate federal law.
The EEOC emphasizes disparate impact as the key framework: AI tools that disproportionately screen out protected classes can be challenged even if they seem neutral on their face.
3. The ADA Has Special Concerns with AI
The guidance devotes significant attention to ADA compliance, highlighting three major issues:
Issue A: AI as "Medical Examinations"
Some AI tools may constitute prohibited pre-offer medical examinations if they:
- Assess mental health or psychological conditions
- Measure traits associated with disabilities (e.g., "neurodivergent" thinking patterns)
- Screen out individuals based on disability-correlated characteristics
Issue B: Screening Out Qualified Disabled Individuals
Many AI tools are trained on data reflecting "typical" behaviors, which can disadvantage qualified candidates with disabilities. Examples:
- Video interview AI penalizing atypical speech patterns (speech impairments, autism)
- Gamified assessments that are inaccessible to candidates with motor disabilities
- Timed tests that don't accommodate processing speed differences
Issue C: Failure to Provide Reasonable Accommodations
The ADA requires reasonable accommodations for qualified individuals with disabilities. Many AI hiring systems lack mechanisms for candidates to request accommodations or for employers to provide them.
EEOC Recommendation
"Employers should ensure that individuals with disabilities have an equal opportunity to request and receive reasonable accommodations that will provide them with an equal opportunity to be assessed by algorithmic decision-making tools."
What the EEOC Says About Specific AI Use Cases
Resume Screening Tools
EEOC concern: Resume screening AI may:
- Penalize gaps in work history (correlated with disability, pregnancy, caregiving)
- Favor certain educational backgrounds (disparate impact on racial/ethnic minorities)
- Use proxies for protected characteristics (zip code as proxy for race, graduation year as proxy for age); a proxy check is sketched below
Compliance requirement: Test for disparate impact across protected categories
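A practical way to act on the proxy concern, as a minimal sketch: measure the statistical association between a facially neutral feature and a protected characteristic in your historical applicant data. The data file, column names, and the 0.3 cutoff below are illustrative assumptions, not EEOC requirements.

```python
# Sketch: flag facially neutral features that may act as proxies for a
# protected characteristic by measuring association in historical data.
# The file and column names ("zip_code", "race") are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(df, feature, protected_attr):
    """Cramér's V between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(df[feature], df[protected_attr])
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return (chi2 / (n * k)) ** 0.5

# applicants = pd.read_csv("historical_applicants.csv")  # hypothetical file
# if cramers_v(applicants, "zip_code", "race") > 0.3:    # illustrative cutoff
#     print("zip_code may be proxying for race; review before deployment")
```

A strong association doesn't prove discrimination on its own, but it flags features that deserve scrutiny before the tool screens real candidates.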
Video Interview Analysis
EEOC concern: Video analysis tools that assess:
- Facial expressions (may disadvantage people with facial differences or conditions affecting expression)
- Voice patterns (may disadvantage people with speech impairments)
- Eye contact (may disadvantage people on autism spectrum)
- Background or appearance (may introduce bias based on socioeconomic status or disability)
Compliance requirement: Ensure accommodations are available; test for disability-related adverse impact
Personality and Cognitive Assessments
EEOC concern: Tests that measure:
- Psychological traits (may be proxy medical exams)
- "Culture fit" (may discriminate based on race, religion, national origin, age)
- Cognitive speed or style (may screen out learning disabilities, ADHD, processing differences)
Compliance requirement: Validate job-relatedness; ensure the tool isn't functioning as a medical exam
Chatbot Interviews
EEOC concern: Automated chat interviews may:
- Penalize non-standard English (national origin, ESL discrimination)
- Be inaccessible to candidates using assistive technology
- Analyze response time in ways that disadvantage disabilities affecting typing or processing speed
Compliance requirement: Test accessibility; provide alternative formats
Employer Responsibilities Under the Guidance
1. Conduct Vendor Due Diligence
The EEOC explicitly states that using a third-party vendor doesn't eliminate employer liability. You must:
- Ask vendors how their AI tools work
- Request data on disparate impact testing
- Obtain validation studies demonstrating job-relatedness
- Understand what characteristics the AI measures
- Document your due diligence efforts
EEOC position: "We told the vendor we needed something compliant" is not a defense.
2. Test for Disparate Impact
The guidance recommends employers test AI tools for disparate impact before deployment and regularly thereafter. This means analyzing whether the tool produces different outcomes for:
- Race and ethnicity
- Sex (including pregnancy and gender identity)
- Age (particularly 40+)
- Disability status
- Religion
- National origin
Methodology: The EEOC references the Uniform Guidelines on Employee Selection Procedures (UGESP), including the "Four-Fifths Rule" for detecting adverse impact.
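Under the Four-Fifths Rule, a selection rate for any group that is less than 80% of the rate for the highest-scoring group is generally regarded as evidence of adverse impact. Here is a minimal sketch of that arithmetic; the group labels and counts are hypothetical.

```python
# Minimal sketch of the UGESP "Four-Fifths Rule": a group's selection rate
# below 80% of the highest group's rate is evidence of adverse impact.
# Group labels and counts are illustrative, not from the EEOC guidance.

def four_fifths_check(counts, threshold=0.8):
    """counts: {group: (selected, applicants)} -> per-group impact ratios."""
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    benchmark = max(rates.values())  # highest group's selection rate
    return {
        g: {"rate": round(r, 3),
            "impact_ratio": round(r / benchmark, 3),
            "flag": (r / benchmark) < threshold}
        for g, r in rates.items()
    }

counts = {"group_a": (48, 100), "group_b": (30, 100)}
for group, result in four_fifths_check(counts).items():
    print(group, result)
# group_b's impact ratio is 0.30 / 0.48 = 0.625 < 0.8, so it gets flagged.
```

Keep in mind the four-fifths ratio is a rule of thumb, not a safe harbor; with small applicant pools, pair it with a significance test (a sketch appears under Step 3 below).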
3. Provide Accommodations
Employers must have processes to:
- Inform candidates they can request accommodations
- Receive and respond to accommodation requests quickly
- Offer alternative assessment methods when needed
- Train staff on providing accommodations for AI-based assessments
EEOC example: A candidate with social anxiety disorder requests an accommodation to skip video interview AI analysis. The employer should provide an alternative evaluation method (e.g., phone interview, work sample) rather than simply rejecting the candidate.
4. Validate Job-Relatedness
If your AI tool shows disparate impact, you must be able to demonstrate it's job-related and consistent with business necessity. This typically requires:
- Criterion validity: Evidence that AI scores correlate with actual job performance (a minimal computation is sketched below)
- Content validity: Evidence that the AI measures job-relevant skills/knowledge
- Construct validity: Evidence that psychological constructs measured are necessary for the job
Reality: Most AI vendors don't provide UGESP-compliant validation studies. This is a major gap.
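For context on what a criterion-validity study boils down to: a correlation between the tool's scores and later job performance among people actually hired. The sketch below shows the core computation with made-up numbers; a real UGESP-conformant study needs a proper job analysis, an adequate sample, and typically an I-O psychologist.

```python
# Sketch of a basic criterion-validity check: do the AI tool's scores
# correlate with later job performance for people actually hired?
# All numbers are hypothetical.
from scipy.stats import pearsonr

ai_scores = [72, 85, 60, 90, 78, 66, 88, 74]             # tool's scores at hire
performance = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.3]   # later ratings (1-5)

r, p_value = pearsonr(ai_scores, performance)
print(f"validity coefficient r={r:.2f}, p={p_value:.3f}")
# A near-zero or non-significant r undercuts any job-relatedness defense.
```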
5. Monitor and Update
AI models change over time (model drift, new training data, algorithm updates). The EEOC recommends ongoing monitoring rather than one-time compliance checks.
Best practice frequency: Annual disparate impact analysis at minimum; quarterly for high-volume hiring
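In practice, ongoing monitoring can be as simple as recomputing each group's impact ratio every review cycle and alerting when it crosses the 0.8 line or shifts sharply between periods. A minimal sketch, with hypothetical quarterly figures:

```python
# Sketch of periodic monitoring: recompute a group's impact ratio each
# quarter and alert on threshold breaches or sharp period-to-period drops.
# All figures below are hypothetical.

history = {  # quarter -> (selected, applicants, benchmark group's rate)
    "2025-Q1": (42, 100, 0.50),
    "2025-Q2": (38, 100, 0.51),
    "2025-Q3": (31, 100, 0.52),  # drifting downward
}

prev = None
for quarter, (sel, total, benchmark_rate) in history.items():
    ratio = (sel / total) / benchmark_rate
    if ratio < 0.8 or (prev is not None and prev - ratio > 0.1):
        print(f"{quarter}: impact ratio {ratio:.2f} -- investigate model drift")
    prev = ratio
```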
What the EEOC Doesn't Say (But Implies)
No "Safe Harbor" for Bias Audits
Some employers assume that passing a NYC Local Law 144 bias audit means they're EEOC-compliant. Not necessarily. The EEOC's standards may be stricter than LL144's requirements, particularly regarding:
- Intersectional analysis (LL144 only requires limited intersectional categories)
- Disability-related impact (LL144 doesn't explicitly require this)
- Validation rigor (EEOC references UGESP; LL144 audits vary in quality)
Encouragement to Abandon Problematic Tools
While the EEOC doesn't explicitly say "stop using AI," the guidance creates a significant compliance burden and liability risk. The subtext: if you can't validate your AI tool and ensure it doesn't discriminate, you shouldn't use it.
Heightened Scrutiny Coming
The publication of formal guidance typically precedes increased enforcement. The EEOC is signaling that AI hiring discrimination is an enforcement priority—expect more investigations and lawsuits.
EEOC Enforcement Activity (2024-2026)
Since issuing the guidance, the EEOC has ramped up enforcement:
By the Numbers (as of Q4 2025):
- 212 charges filed alleging AI-related discrimination
- 34 lawsuits filed by EEOC
- $18.3 million in settlements
- 3 pattern-or-practice investigations ongoing
Notable Cases:
EEOC v. [Redacted Staffing Agency] (2025)
- Allegation: Video interview AI showed severe disparate impact on Black applicants
- Finding: Company had internal data showing bias but continued using the tool
- Settlement: $8.7 million + discontinuation of tool + 5-year consent decree
EEOC v. [Redacted Retail Corp] (2025)
- Allegation: Resume screening AI rejected women and older workers at higher rates
- Finding: Employer failed to conduct disparate impact analysis; relied on vendor claims of compliance
- Settlement: $3.2 million + required annual bias audits
Common Investigation Triggers:
- Candidate complaints
- Publicized bias audit results showing high disparate impact
- Media coverage of AI vendor issues
- Whistleblower reports from employees
- Targeted enforcement initiatives
How to Align with EEOC Guidance: Practical Steps
Step 1: Inventory AI Tools
List every AI or automated system used in:
- Resume/application screening
- Candidate assessment (video, chat, skills tests)
- Interview scheduling or routing
- Candidate ranking or scoring
Step 2: Assess Each Tool for ADA Concerns
For each tool, ask:
- Could this function as a medical examination?
- Could it screen out qualified individuals with disabilities?
- Is it accessible to candidates using assistive technology?
- Do we have an accommodation process for this tool?
Step 3: Conduct or Request Disparate Impact Analysis
Either:
- Option A: Request disparate impact data from your vendor (many won't have it)
- Option B: Conduct your own analysis using candidate demographic data + tool outcomes
- Option C: Hire external expert (I-O psychologist) to evaluate
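If you take Option B, don't rely on the four-fifths ratio alone: with small applicant pools it can both false-alarm and miss real problems. A two-proportion z-test is a common supplement; this sketch uses only the Python standard library, with hypothetical counts.

```python
# Sketch: supplement the four-fifths ratio with a two-proportion z-test.
# Counts are hypothetical; involve a statistician or I-O psychologist
# before drawing conclusions from real data.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(sel_a, n_a, sel_b, n_b):
    p_pool = (sel_a + sel_b) / (n_a + n_b)          # pooled selection rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (sel_a / n_a - sel_b / n_b) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value

z, p = two_proportion_ztest(sel_a=120, n_a=300, sel_b=75, n_b=300)
print(f"z={z:.2f}, p={p:.4f}")  # a small p suggests the gap isn't chance
```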
Step 4: Build Accommodation Processes
Create and document:
- How candidates can request accommodations
- What alternatives you'll offer
- Training for recruiters on accommodation requests
- Timeline for responding to requests
Step 5: Document Vendor Due Diligence
For each AI vendor, document:
- Questions you asked about compliance
- Validation or testing data they provided
- Contractual terms regarding compliance
- How you evaluated competing tools
Step 6: Establish Ongoing Monitoring
Set calendar reminders for:
- Annual disparate impact analysis
- Quarterly review of candidate accommodation requests
- Periodic re-validation of AI tools
- Updates to guidance and enforcement trends
Common Misconceptions About the Guidance
❌ "It's only about the ADA, not Title VII"
While titled as ADA guidance, the document repeatedly references Title VII, ADEA, and GINA. The principles apply across all federal EEO laws.
❌ "It's just guidance, not legally binding"
True that EEOC guidance isn't statutory law, but courts give significant weight to EEOC interpretations of federal EEO statutes. If you're sued, the guidance will be cited.
❌ "If our vendor says they're compliant, we're covered"
The guidance explicitly rejects this. Employer liability doesn't transfer to vendors. You must independently verify compliance.
❌ "Small companies don't need to worry"
Federal EEO laws generally apply to employers with 15+ employees (Title VII, ADA) or 20+ employees (ADEA). If you meet these thresholds and use AI, the guidance applies.
How EmployArmor Helps You Comply
- Guidance interpretation: We translate EEOC requirements into actionable steps
- Disparate impact testing: Automated analysis of AI tool outcomes by protected categories
- Vendor assessment: Due diligence questionnaires aligned with EEOC expectations
- Accommodation workflow: Built-in processes for requesting and documenting accommodations
- Monitoring and alerts: Track ongoing compliance with EEOC standards
- EEOC response support: If charged, we help compile required documentation
Need help aligning with EEOC guidance?
Get Your Compliance Assessment →
Frequently Asked Questions
Does the guidance apply to AI used in performance management and promotion decisions?
Yes. While the guidance focuses on hiring examples, the principles apply to any employment decision: promotion, performance evaluation, compensation, termination. If AI is involved in consequential employment decisions, federal EEO laws apply.
What if our AI tool has minimal disparate impact but we can't explain how it works ("black box" AI)?
Even without current disparate impact, lack of explainability creates risk. If impact emerges later or someone challenges the tool, you'll need to demonstrate job-relatedness. That's nearly impossible with black box AI. Consider tools with greater transparency.
Can we require candidates to consent to AI evaluation as a condition of applying?
Federal law doesn't explicitly prohibit this, but it's risky. If certain protected groups decline consent at higher rates, mandatory consent could produce discriminatory outcomes. Safer: offer alternative evaluation for those who decline.
How does the guidance interact with state AI hiring laws?
They coexist. You must comply with both EEOC guidance and state-specific requirements (e.g., NYC bias audits, Illinois consent). State laws often add requirements beyond federal baseline.
What should we do if our current AI tool doesn't meet EEOC standards?
You have several options: (1) Discontinue the tool, (2) Request vendor remediation, (3) Supplement with additional validation and monitoring, (4) Use the tool only in non-protected-class-determinative ways. Consult employment counsel for your specific situation.
Does EEOC guidance apply to AI used in promotions, transfers, and terminations—or just hiring?
All employment decisions. The EEOC's guidance explicitly states it applies to "hiring, firing, promotion, discipline, and terms and conditions of employment." If you use AI for internal promotions, performance evaluations that affect pay, or termination risk assessments, EEOC standards apply equally. Many employers mistakenly focus only on external hiring AI; internal talent management tools (succession planning, performance prediction models) require the same validation and monitoring. The EEOC has investigated cases involving AI-based layoff selection and promotion algorithms. See our Compliance Program Guide for internal employment decision considerations.
If we discover our AI has disparate impact, are we required to self-report to the EEOC?
No legal requirement to proactively report to EEOC. However, you must cease the discriminatory practice and remediate. Some employers consult counsel about voluntary disclosure when discovered impact is severe—proactive disclosure sometimes favorably influences enforcement outcomes. Key obligations: (1) Stop using the tool or modify it to eliminate impact, (2) Investigate whether past candidates were harmed, (3) Consider remedial measures (reaching out to rejected candidates for re-evaluation), (4) Document corrective actions. The EEOC looks favorably on employers who discover problems through internal auditing and fix them promptly—this demonstrates good faith. What you cannot do: discover impact and continue using the tool unchanged. That converts accidental discrimination into knowing discrimination.
2026 EEOC Enforcement Trends
Technology-Focused Investigations
The EEOC has significantly expanded its capacity to investigate AI discrimination:
- New Technology Division: Established 2025, staffed with data scientists, ML engineers, and statisticians who can independently evaluate AI tools.
- Algorithmic testing: The EEOC sends "paired test applications," matched profiles identical except for protected characteristics, to employers suspected of AI discrimination in order to detect disparate treatment (a simplified simulation follows this list).
- Vendor investigations: The EEOC is investigating major AI hiring vendors (names redacted, though widely known in the industry) to understand tool capabilities and limitations across their client bases.
- Academic partnerships: Collaborating with university researchers to develop AI bias detection methodologies and establish industry benchmarks.
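To make the paired-testing idea concrete, here is a simplified simulation of the approach: matched applications that differ only in one demographic signal, run through the screening tool, with pass rates compared across arms. The screen_candidate() stub and profile fields are hypothetical stand-ins, not the EEOC's actual test harness.

```python
# Simplified simulation of paired "audit" testing: identical profiles except
# for one demographic signal, compared on pass rates. screen_candidate() is
# a random stub standing in for the AI tool under test.
import random

def screen_candidate(profile):
    return random.random() < 0.5  # placeholder for the real screening tool

base_profile = {"experience_years": 6, "degree": "BS", "skills": ["sql", "excel"]}
arms = {"signal_a": 0, "signal_b": 0}
TRIALS = 1000

for _ in range(TRIALS):
    for signal in arms:
        profile = dict(base_profile, name_signal=signal)  # only varying field
        if screen_candidate(profile):
            arms[signal] += 1

for signal, passed in arms.items():
    print(f"{signal}: pass rate {passed / TRIALS:.2%}")
# Materially different pass rates for identical qualifications indicate
# disparate treatment; pair with the significance test shown in Step 3.
```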
Industry-Specific Sweeps
EEOC conducting targeted investigations in high-AI-adoption industries:
- Retail & hospitality: Focus on resume screening and scheduling AI that may discriminate based on availability (disparate impact on caregivers, disability accommodation needs).
- Healthcare: Investigating AI used for clinical staff hiring, particularly impact on older nurses and disability discrimination in video interview AI.
- Technology sector: Coding assessment AI and "culture fit" algorithms that may discriminate based on non-job-related factors.
- Financial services: Risk assessment AI that evaluates candidates' social media, credit, or network connections—potential for redlining and socioeconomic discrimination.
Related Resources
- Federal AI Hiring Laws Overview
- Complete AI Hiring Compliance Guide
- AI Bias Audit Guide
- How to Conduct an AI Bias Audit
Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.