Amazon's AI Resume Screening Tool: The Scandal That Changed AI Hiring Law
Amazon's AI resume screening tool, built from 2014 and quietly abandoned by 2017, systematically discriminated against female job applicants. The algorithm was trained on a decade of Amazon's own resumes, which were disproportionately submitted by men. As a result, the model learned to downgrade resumes containing the word "women's" and to penalize graduates of all-women's colleges. Amazon scrapped the tool before it was ever used in official hiring decisions, but when Reuters broke the story in 2018, the damage to public trust was done and the legal landscape would never be the same.
According to Reuters, which broke the story in October 2018, the tool was developed by Amazon's machine learning team and was designed to rate candidates from one to five stars. It was never used officially, but the disclosure sparked global scrutiny of AI hiring systems.
This case established the template for AI hiring liability that regulators now enforce across the United States. Understanding what Amazon got wrong—and what the law now requires—is essential for any employer using AI tools in recruitment. For a running list of similar cases, see our AI hiring lawsuits tracker.
What Happened: The Amazon AI Bias Timeline
2014: Development Begins
Amazon's machine learning team builds an automated candidate screening tool, training it on resumes submitted to the company over the previous 10 years. Because tech roles at Amazon were historically male-dominated, the large majority of training resumes came from men.
2015–2017: Bias Discovered Internally
Engineers testing the model notice it consistently penalizes resumes that include women-signaling language. Words like "women's" (as in "women's chess club" or "women's leadership conference") trigger lower scores. Resumes from graduates of all-women's colleges are systematically downgraded. Amazon's team attempts multiple fixes, but the bias persists.
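The mechanism is easy to reproduce. In the deliberately simplified sketch below (a toy model in Python, not Amazon's actual system), a text classifier trained on historically biased hire/reject labels learns a strongly negative weight for a gender-signaling token, even when qualifications are identical.

```python
# Minimal illustration (NOT Amazon's system): a classifier trained on
# historically biased hire/reject labels learns to penalize a
# gender-signaling token. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" resumes: identical qualifications, but the
# biased labels reject the resumes containing "womens".
resumes = [
    "software engineer java distributed systems",
    "software engineer java distributed systems womens chess club",
    "machine learning python research",
    "machine learning python research womens leadership conference",
    "backend developer golang kubernetes",
    "backend developer golang kubernetes womens coding group",
] * 50
labels = [1, 0, 1, 0, 1, 0] * 50  # 1 = hired, 0 = rejected (biased history)

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for "womens" is strongly negative: the model has
# encoded the historical bias, not candidate quality.
idx = vec.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][idx])
```

Removing the obvious token does not cure the problem: as Amazon's engineers found, the model simply shifts weight to correlated proxies elsewhere in the text.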
2017–2018: Tool Scrapped, Story Breaks
Amazon dissolves the team and abandons the project (per Reuters, by early 2017). In October 2018, Reuters publishes a detailed investigation revealing the tool's discriminatory behavior. The story goes global.
2019–2022: Legislative Fallout
The Amazon disclosure accelerates legislative action. New York City passes NYC Local Law 144 (effective 2023), requiring bias audits for AI hiring tools. Illinois' Artificial Intelligence Video Interview Act takes effect in 2020. The EEOC launches its Artificial Intelligence and Algorithmic Fairness Initiative in 2021 and publishes technical assistance in 2023.
2023–2026: Enforcement Era Begins
The EEOC files its first AI-related hiring discrimination cases. The OFCCP and DOJ settle with DHI Group (Dice.com) over discriminatory job ad targeting. Employers face active enforcement risk for any AI tool that produces discriminatory outcomes—regardless of intent.
The Legal Framework: Why Amazon's Tool Was Unlawful
Amazon never faced direct litigation over the tool because it was never officially deployed. But had it been deployed, here is the legal exposure it would have faced:
Title VII of the Civil Rights Act (42 U.S.C. § 2000e-2)
Title VII prohibits employment discrimination based on sex, among other protected characteristics. Critically, it covers both disparate treatment (intentional discrimination) and disparate impact (facially neutral practices that disproportionately harm a protected class).
Amazon's algorithm produced disparate impact against women: female candidates were systematically scored lower than equally qualified male candidates. Under Griggs v. Duke Power Co. (401 U.S. 424, 1971), the Supreme Court established that employment practices with discriminatory effects violate Title VII even without discriminatory intent.
Penalty exposure: Title VII violations can result in back pay, front pay, and combined compensatory and punitive damages capped at $300,000 per claimant (for employers with more than 500 employees), plus attorney's fees and injunctive relief.
EEOC Technical Assistance on AI (2023)
In May 2023, the EEOC released formal guidance clarifying that employers are liable for discriminatory AI tools, including tools developed or operated by third-party vendors. In the agency's view, you cannot outsource away liability by blaming your HR software vendor.
The guidance makes the point directly: an employer may be held liable under Title VII if it uses an algorithmic decision-making tool that discriminates against applicants or employees based on a protected characteristic, even if the tool was developed or administered by a third party.
Executive Order 13985 and Federal Contractor Requirements
For federal contractors and subcontractors, Executive Order 13985 (January 2021) on advancing racial equity reinforced that algorithmic tools must not perpetuate systemic discrimination. The OFCCP now routinely examines AI hiring tools during compliance audits. Additionally, the FTC has issued guidance on algorithmic bias and unfair or deceptive practices that applies to AI vendors operating in the hiring space.
What Makes an AI Hiring Tool Legally Risky
Based on regulatory guidance and enforcement patterns, here are the specific characteristics that put AI tools in legal jeopardy:
| Risk Factor | Why It's Dangerous | Legal Standard |
|---|---|---|
| Training on historical data | Encodes past bias into future decisions | Disparate impact under Title VII |
| No bias audit | Cannot demonstrate tool is lawful | NYC LL 144, EEOC guidance |
| Black-box scoring | Inability to explain adverse decisions | FCRA adverse action requirements |
| No human override | Fully automated decisions on protected classes | EU AI Act (affects multinationals) |
| Vendor-developed without audit | Employer still liable for outcomes | EEOC 2023 guidance |
State and Local AI Hiring Laws in 2026
The Amazon case accelerated a wave of state legislation that now governs AI in hiring. Review the full AI hiring laws by state for jurisdiction-specific requirements.
New York City Local Law 144 (Effective July 2023)
NYC Local Law 144 requires employers using "automated employment decision tools" (AEDTs) to:
- Conduct annual bias audits by independent third parties (the core calculation is sketched below)
- Publish audit results publicly
- Notify candidates at least 10 business days before an AEDT is used to assess them
- Include instructions for requesting an alternative selection process or a reasonable accommodation
Penalty: $500 for a first violation and $500–$1,500 for each subsequent violation, with each day of noncompliant use counting as a separate violation.
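Under the DCWP rules implementing LL 144, a bias audit for a scoring tool computes a "scoring rate" for each demographic category (the share of that category scoring above the sample median) and an "impact ratio" (each category's rate divided by the highest category's rate). Here is a minimal sketch of that core calculation with hypothetical numbers; a real audit must follow the full DCWP rules, including intersectional categories and an independent auditor.

```python
# Minimal sketch of an LL 144-style impact ratio for a scoring AEDT,
# using hypothetical data. Not a substitute for a real DCWP-compliant
# audit (intersectional categories, independent auditor, publication).
from statistics import median

# Hypothetical tool scores by demographic group.
scores = {
    "men":   [4.1, 3.8, 4.5, 3.9, 4.2, 4.7, 3.6, 4.0],
    "women": [3.2, 3.9, 3.1, 4.4, 3.0, 3.5, 2.9, 3.3],
}

# Scoring rate: share of each group scoring above the overall median.
all_scores = [s for group in scores.values() for s in group]
cutoff = median(all_scores)
rates = {
    g: sum(s > cutoff for s in group) / len(group)
    for g, group in scores.items()
}

# Impact ratio: each group's rate relative to the highest-rate group.
best = max(rates.values())
for g, rate in rates.items():
    print(f"{g}: scoring rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

With these numbers the tool scores 75% of men but only 25% of women above the median, an impact ratio of 0.33 for women, which is exactly the kind of result an audit must surface and publish.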
Illinois Artificial Intelligence Video Interview Act (820 ILCS 42)
The Illinois AI hiring law requires employers using AI to analyze video interviews to:
- Notify candidates before the interview
- Explain how the AI works
- Obtain consent
- Delete recordings within 30 days of a candidate's request
Penalty: Up to $1,000 per violation, plus attorney's fees.
Maryland and Washington
Both states have enacted disclosure-related laws: Maryland's HB 1202 (2020) requires applicant consent before facial recognition technology is used in job interviews, and Washington requires disclosure when AI is used in employment screening. Colorado's comprehensive AI law, which treats employment decisions as high-risk, takes effect in 2026, and legislation is pending in California and Texas as of March 2026.
Federal Proposals
The Algorithmic Accountability Act (proposed 2022, reintroduced 2024) would require companies to assess automated decision systems for bias and accuracy. While not yet law, it signals the trajectory of federal regulation.
Lessons for Employers: How to Use AI Hiring Tools Lawfully
1. Audit Before You Deploy
Before implementing any AI hiring tool—whether built internally or purchased from a vendor—use our AI hiring compliance checklist and conduct an independent bias audit. Examine the tool's outputs across demographic groups. If women, minorities, or candidates over 40 are scored lower for equivalent qualifications, the tool has a disparate impact problem.
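One quick screening heuristic, drawn from the EEOC's Uniform Guidelines on Employee Selection Procedures, is the four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, that is generally treated as initial evidence of adverse impact. A minimal sketch with hypothetical numbers (a flag here is a reason for deeper statistical analysis, not a legal conclusion):

```python
# Quick four-fifths (80%) rule check on hypothetical selection data.
# The heuristic comes from the EEOC's Uniform Guidelines; passing it is
# not a safe harbor, and failing it calls for deeper analysis.
selected = {"men": 120, "women": 45}
applied = {"men": 400, "women": 300}

rates = {g: selected[g] / applied[g] for g in selected}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} ({flag})")
```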
According to a 2024 Harvard Business Review report, only 27% of employers using AI in hiring had conducted any form of bias audit. This is the single biggest compliance gap in AI hiring today.
2. Demand Vendor Transparency
Your HR software vendor cannot absorb your legal liability. Ask vendors:
- What data was this tool trained on?
- Has the tool been audited for bias?
- Can you provide demographic outcome data?
- What is your process if bias is discovered?
If vendors refuse to answer these questions, that is itself a red flag.
3. Maintain Human Oversight
The EEOC's guidance specifically warns against fully automated hiring decisions. Human reviewers should be able to override algorithmic recommendations, and there should be documented processes for doing so. This is not just a legal requirement in some jurisdictions; it is the difference between catching a biased recommendation before it becomes a hiring decision and discovering it afterward in litigation.
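One way to make that oversight auditable is to require every adverse algorithmic recommendation to pass through a recorded human decision before it takes effect. The sketch below is purely illustrative; the names and fields are our own, not a regulatory schema.

```python
# Illustrative sketch: no AI rejection takes effect without a recorded
# human decision. All names and fields here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedDecision:
    candidate_id: str
    ai_recommendation: str  # e.g. "reject"
    human_decision: str     # e.g. "reject", or "advance" to override
    reviewer: str
    rationale: str
    reviewed_at: datetime

def finalize(candidate_id: str, ai_recommendation: str,
             reviewer: str, human_decision: str, rationale: str) -> ReviewedDecision:
    # The human decision, not the AI score, is what downstream systems
    # act on; the record preserves both for later audits.
    return ReviewedDecision(
        candidate_id=candidate_id,
        ai_recommendation=ai_recommendation,
        human_decision=human_decision,
        reviewer=reviewer,
        rationale=rationale,
        reviewed_at=datetime.now(timezone.utc),
    )

decision = finalize("c-1042", "reject", "j.doe", "advance",
                    "AI score driven by employment gap; skills match role")
print(decision)
```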
4. Document Everything
If you are ever audited or sued, you will need to demonstrate:
- What tool you used, and when
- How candidates were informed
- Whether you conducted bias testing
- What your human review process looked like
- How adverse action decisions were made
Under the Fair Credit Reporting Act (15 U.S.C. § 1681 et seq.), if you use background check data in an automated decision, you must follow the adverse action process: a pre-adverse action notice with a copy of the report before the final decision, and an adverse action notice after it.
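One lightweight way to keep these records is an append-only log with one structured entry per candidate decision. The sketch below is illustrative only; the field names are our own, not drawn from any statute or regulation.

```python
# Illustrative append-only compliance log entry covering the items
# above. Field names are hypothetical, not a regulatory schema.
import json
from datetime import datetime, timezone

entry = {
    "candidate_id": "c-1042",
    "tool": {"name": "resume-ranker", "version": "2.3.1"},        # what tool, when
    "candidate_notified_at": "2026-02-10T14:03:00Z",              # how the candidate was informed
    "bias_audit_ref": "audits/2025-annual-ll144.pdf",             # bias testing evidence
    "human_review": {"reviewer": "j.doe", "overrode_ai": True},   # human review process
    "adverse_action": {                                           # how the decision was made
        "taken": False,
        "reasons": [],
        "fcra_notices_sent": False,  # required only if consumer report data was used
    },
    "logged_at": datetime.now(timezone.utc).isoformat(),
}

with open("decision_log.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```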
5. Check Local Compliance Requirements
If you hire in New York City, you must comply with NYC Local Law 144 today. Additional state laws are coming. A single national hiring process may need to be adapted for local requirements.
Check your state's AI hiring law requirements →
The EEOC's 2023 AI Guidance: What Employers Must Know
The EEOC's May 2023 technical assistance document on assessing adverse impact in AI-driven selection procedures under Title VII, issued under its Artificial Intelligence and Algorithmic Fairness Initiative, is the most important regulatory document for employers using AI in hiring. Key points:
- Employer liability is non-delegable. You cannot blame your vendor.
- Disparate impact applies. Intent is irrelevant; outcomes are what matter.
- Resume screening tools are covered. This includes keyword filters, scoring algorithms, and any automated ranking system.
- Testing obligations exist. Employers should regularly test tools for adverse impact.
- Reasonable accommodations apply. Candidates with disabilities who need alternative assessment methods must be accommodated.
Comparison: AI Hiring Liability by Jurisdiction
| Jurisdiction | Key Law | Requirement | Max Penalty |
|---|---|---|---|
| New York City | Local Law 144 | Annual bias audit + disclosure | $1,500/day |
| Illinois | AI Video Interview Act (820 ILCS 42) | Consent + deletion rights | $1,000/violation |
| Federal | Title VII (42 U.S.C. § 2000e) | No disparate impact | $300,000/claimant |
| Federal (contractors) | OFCCP regs | Affirmative action + audit | Debarment |
| Maryland | HB 1202 | Facial recognition consent | $500–$10,000 |
Frequently Asked Questions
Was Amazon sued for its AI resume tool?
Amazon was not sued directly over the AI resume tool because the tool was never officially deployed in hiring decisions. However, the disclosure of the tool's discriminatory behavior triggered global regulatory scrutiny and accelerated legislation governing AI in hiring. Amazon's case is routinely cited by regulators and commentators as a cautionary example of the risks of AI hiring tools built on biased training data.
Can employers use AI to screen resumes legally?
Yes, but with significant compliance obligations. Employers must ensure AI screening tools do not produce disparate impact against protected classes under Title VII (42 U.S.C. § 2000e). In jurisdictions like New York City, employers must conduct annual independent bias audits and disclose to candidates when AI tools are used. The EEOC's 2023 guidance clarifies that employers remain liable for their AI tools' discriminatory outcomes even if the tools were developed by third-party vendors.
What is a bias audit for AI hiring tools?
A bias audit is an independent statistical analysis of an AI tool's outcomes to determine whether the tool produces discriminatory results for protected groups. NYC Local Law 144 requires that these audits be conducted by independent third parties, published publicly, and updated annually. A bias audit examines whether candidates from different demographic groups—by race, gender, age, and other protected characteristics—receive meaningfully different scores from the AI tool. See the NYC Department of Consumer and Worker Protection's (DCWP) rules and guidance on LL 144 for the official audit standards.
What does "disparate impact" mean in AI hiring?
Disparate impact means that a facially neutral employment practice (like an AI scoring algorithm) disproportionately harms members of a protected class. Under Title VII and Griggs v. Duke Power Co. (401 U.S. 424, 1971), employers can be liable for practices that produce discriminatory outcomes even without discriminatory intent. If your AI tool scores women 30% lower than men for equivalent qualifications, that is disparate impact, and it is unlawful unless the employer can show the practice is job-related and consistent with business necessity.
Do AI hiring laws apply to small businesses?
Most federal employment discrimination laws apply to employers with 15 or more employees. NYC Local Law 144 applies to employers who use AEDTs and employ workers in New York City, regardless of company size. The Illinois AI Video Interview Act applies to employers of any size using AI video analysis in Illinois. Small businesses should review the specific threshold requirements for each applicable jurisdiction using our compliance FAQ.
What should we do if our AI vendor is audited or changes its tool?
First, contractually require your vendor to notify you of any changes to their algorithm, any bias audit results, or any regulatory actions. Second, conduct your own independent outcome testing annually. Third, maintain records of all communications with vendors about the tool's design and testing. You cannot assume the vendor's compliance program is sufficient to protect your organization.
How does the FCRA apply to AI hiring tools?
The Fair Credit Reporting Act (15 U.S.C. § 1681) governs background checks and consumer reports used in employment decisions. If your AI tool incorporates background check data or uses third-party consumer data, the adverse action process applies: before rejecting a candidate you must provide a pre-adverse action notice with a copy of the report and a summary of rights, and after the decision you must (1) notify the candidate of the adverse action, (2) provide the name of the consumer reporting agency, and (3) advise the candidate of their right to dispute the information.
Is the EU AI Act relevant for US employers?
Yes, if you employ or recruit candidates in the European Union. The EU AI Act, whose obligations for high-risk systems apply from 2026, classifies AI systems used in employment as "high-risk" and imposes strict transparency, documentation, and human oversight requirements. US-headquartered multinationals with operations in the EU must comply, and those requirements may influence how vendors build their tools globally.
Key Takeaways
- Amazon's AI hiring tool was biased against women because it was trained on historical data that reflected a male-dominated workforce—a problem that applies to any AI trained on past hiring decisions.
- Employers are legally liable for discriminatory AI tools under Title VII (42 U.S.C. § 2000e), even if the tool was built by a third-party vendor.
- Disparate impact is the key legal standard. You don't need discriminatory intent to violate employment law—discriminatory outcomes are sufficient.
- Bias audits are now legally required in New York City and strongly recommended everywhere. The EEOC expects employers to test their AI tools for discriminatory impact.
- Document your compliance program. In the event of a lawsuit or audit, your ability to show what testing you did, what your human review process looked like, and how you notified candidates will be determinative.
- State laws are multiplying. Illinois, Maryland, and New York City have enacted laws; Colorado's comprehensive AI law takes effect in 2026, and California and Texas are close behind. Review our AI hiring laws by state and your state employment compliance obligations regularly.
EmployArmor monitors AI hiring compliance requirements in every U.S. jurisdiction, alerts you when new laws apply to your hiring process, and generates the documentation you need to demonstrate compliance. Get your free compliance assessment →
Last updated: March 2026. This content is for informational purposes only and does not constitute legal advice. Consult an employment attorney for guidance specific to your situation.