Law Update

EmployArmor Legal Team

New Laws That Went Into Effect

Washington State: AI Employment Standards Act (Effective April 1, 2026)

Washington State's AI Employment Standards Act, passed in late 2025, became fully enforceable on April 1, 2026. This landmark legislation sets a high bar for AI use in hiring, emphasizing transparency, fairness, and candidate rights. Employers using AI tools for recruitment, screening, or decision-making in Washington must now comply with stringent requirements to avoid penalties.

Key requirements include:

  • Impact assessments required: Before deploying high-risk AI systems—such as resume screeners, predictive scoring tools, or automated interviewers—employers must conduct and publicly disclose algorithmic impact assessments. These assessments evaluate potential biases across protected characteristics like race, gender, age, and disability.
  • Disclosure mandate: Candidates must be notified at least 15 days before AI evaluates their application, resume, or interview performance. Disclosures must detail the AI's purpose, data inputs, and decision-making influence.
  • Opt-out rights: Candidates can request evaluation without AI, and employers must provide a human-reviewed alternative process.
  • Human oversight: Any automated hiring decision requires meaningful human intervention, including the ability to override AI recommendations with documented justification.

Who it applies to: Any employer with Washington-based employees or candidates, regardless of company size or AI vendor used. This includes remote workers residing in Washington.

Penalties: Fines up to $10,000 per violation, plus a private right of action allowing candidates to sue for damages, injunctive relief, and attorney's fees.

Compliance deadline: Immediate—already in effect since April 1, 2026. Employers who delayed audits or disclosures face retroactive liability.

This law has already prompted dozens of compliance filings and is influencing neighboring states like Oregon and Idaho.

Oregon: AI Hiring Transparency Law (Effective May 1, 2026)

Oregon's AI Hiring Transparency Law took effect on May 1, 2026, focusing primarily on disclosure and data privacy rather than full audits. It is less burdensome than Washington's law but still requires proactive steps from employers using AI.

Key provisions:

  • Disclosure required: Employers must inform candidates before AI use about what data the AI evaluates (e.g., keywords in resumes, tone in responses) and how it influences decisions (e.g., ranking or pass/fail).
  • Data retention limits: AI-analyzed candidate data cannot be retained longer than 2 years without explicit consent. This applies to resumes, video footage, and derived scores.
  • Right to explanation: Candidates can request a limited explanation of their AI evaluation, including key factors that led to advancement or rejection.

Penalties: $500 for the first violation per candidate, escalating to $1,000 for subsequent violations. The Oregon Bureau of Labor and Industries enforces via complaints.

This law complements federal EEOC guidelines and is particularly relevant for multistate employers hiring in the Pacific Northwest.

Minnesota: Video Interview AI Consent Act (Effective June 15, 2026)

Minnesota's Video Interview AI Consent Act, effective June 15, 2026, mirrors Illinois' BIPA-style requirements but targets only AI analysis of video interviews. It does not cover resume screening or chatbots.

Requirements:

  • Written consent required: Explicit written consent must be obtained before AI analyzes video for traits like facial expressions, speech patterns, or body language.
  • Explanation mandate: Consent forms must explain what the AI evaluates (e.g., "sentiment analysis, eye contact") and potential decision impacts.
  • Data deletion: Upon request, delete video footage and AI outputs within 30 days, with proof of deletion provided.

Scope: Limited to video interview AI; other tools exempt.

This law addresses privacy concerns around biometric-like analysis and has spurred vendors to update consent flows.

Major Legislation Passed (Not Yet Effective)

Massachusetts: Algorithmic Accountability and Fairness Act (Passed May 15, 2026; Effective January 1, 2027)

Massachusetts made history on May 15, 2026, by passing the nation's most comprehensive AI employment law. Effective January 1, 2027, it imposes rigorous standards modeled on EU AI Act principles but tailored to U.S. hiring.

When effective, it mandates:

  • Pre-deployment testing: Bias audits required before AI deployment, in addition to annual renewals. Audits must use diverse datasets reflecting Massachusetts demographics.
  • Intersectional analysis: Beyond single protected classes (e.g., race or gender), audits must test intersections (e.g., Black women, LGBTQ+ veterans) to uncover compounded biases.
  • Explainability: Candidates rejected by AI can request a personalized explanation, including top scoring factors and alternatives considered.
  • Independent auditor registry: The state will certify auditors; only they can perform compliant audits.
  • Private right of action: Violations allow lawsuits for up to $25,000 per incident, plus actual damages.

Significance: As the strictest U.S. AI hiring law, Massachusetts sets a template for other blue states, such as New York, and for expansions of California's existing rules. National vendors will need to adapt.

Compliance preparation: Massachusetts employers should initiate audits immediately—processes take 2-4 months. Non-MA employers: Prepare for copycat laws.

Federal Developments

AI Accountability Act Advances (June 12, 2026)

On June 12, 2026, the federal AI Accountability Act cleared committee and advanced to the House floor. If passed, it would establish nationwide standards, potentially preempting lighter state laws.

Key features:

  • National AI hiring standards, creating a compliance floor.
  • Annual bias audits for all employers (no small-business exemption).
  • Mandatory candidate disclosures.
  • Penalties up to $50,000 per violation.
  • FTC as primary enforcer, with EEOC support for discrimination claims.

Status: Strong momentum in the House; filibuster risk in the Senate.

Earliest effective date: Mid-2027.

Impact: Universal coverage would end the "state patchwork" excuse, though stricter state laws would remain additive.

EEOC Updated Guidance (June 20, 2026)

The EEOC issued pivotal guidance on June 20, 2026, clarifying AI under Title VII and ADA:

  • Vendor liability: Employers liable for third-party AI discrimination; "we trusted the vendor" is no defense.
  • Intersectionality: Recommended (not mandated) in audits.
  • Disability accommodation: AI tools must accommodate candidates with disabilities (e.g., text alternatives for vision-impaired applicants).
  • Validation: AI tools must be validated like traditional pencil-and-paper tests under the Uniform Guidelines on Employee Selection Procedures.

"Employers cannot outsource their responsibility for non-discrimination. If a vendor's AI tool produces discriminatory outcomes, the employer—not the vendor—bears primary legal liability."

Though non-binding, this guidance is likely to influence courts and agencies.
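
The EEOC's validation point refers to adverse-impact analysis under the Uniform Guidelines, whose well-known "four-fifths rule" compares each group's selection rate to that of the most-selected group. Below is a minimal, hypothetical sketch of that screen; group labels and counts are invented, and a passing ratio is only a starting signal, not a compliant bias audit.

```python
# Illustrative "four-fifths rule" screen from the Uniform Guidelines on
# Employee Selection Procedures (29 CFR 1607.4(D)). Hypothetical data;
# real validation also requires significance testing and job-relatedness
# evidence.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.80 flag potential adverse impact."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant pools: (selected, total applicants)
pools = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(pools)
flagged = [g for g, r in ratios.items() if r < 0.80]  # groups below 4/5ths
```

In this made-up example, group_b's selection rate (30%) is well under four-fifths of group_a's (48%), so it would be flagged for closer statistical review.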

Enforcement Actions and Settlements

First Major Class-Action Settlement: $4.5 Million (May 28, 2026)

A retailer settled for $4.5 million on May 28, 2026, in the first big AI hiring class action. Allegations:

  • Facial/eye-tracking AI in videos.
  • No NYC bias audits.
  • Undisclosed AI use.
  • Denied accommodations.

Terms:

  • $4.5M to 3,200 plaintiffs (~$1,400 each).
  • Banned facial analysis.
  • 3-year monitoring.
  • Full compliance overhaul.

This settlement is widely seen as opening the floodgates for AI hiring litigation.

NYC Enforcement Surge

NYC issued 47 Local Law 144 notices in Q2—exceeding 2025 totals.

Violations:

  • Bias audit failures (65%).
  • Poor disclosures (25%).
  • Ungoverned AI (10%).

Fines: $850,000+ aggregate.

California Attorney General Investigations

CA AG probed 6 employers under AB 2930:

  • 2 tech.
  • 2 healthcare.
  • 1 retail.
  • 1 finance.

Ongoing; expect settlements.

Court Decisions

Wilson v. TechCorp (9th Circuit, May 15, 2026)

Issue: Can an employer face Title VII liability for AI-driven disparate impact without discriminatory intent?

Holding: Yes. Disparate impact liability applies regardless of intent, and good-faith reliance on a vendor is not a defense.

Rodriguez v. Financial Services Inc. (S.D.N.Y., June 8, 2026)

Issue: Can candidates sue employers directly over NYC Local Law 144 bias audit failures?

Holding: Yes; direct private lawsuits are allowed.

Pending State Legislation (Likely to Pass in Q3/Q4)

High Probability

  • New Jersey: Transparency like Oregon.
  • Virginia: Video consent like Minnesota.
  • Michigan: Full accountability like Massachusetts.

Moderate Probability

  • Pennsylvania: Audits for contractors.
  • Georgia: Disclosures.
  • Arizona: Video limits.

What Employers Should Do Now

If You Hire in Washington, Oregon, or Minnesota

  • Update candidate disclosures for each jurisdiction.
  • Minnesota: obtain written consent before AI analyzes video interviews.
  • Washington: complete and publicly disclose algorithmic impact assessments.
  • Oregon: purge AI-analyzed candidate data retained beyond 2 years without consent.

If You Hire in Massachusetts

  • Begin pre-deployment bias audits now (they take 2-4 months).
  • Identify a state-certified auditor once the registry launches.
  • Prepare for intersectional analysis across combined protected classes.
  • Build candidate-facing explanations for AI-driven rejections.

All Employers Using AI

  1. Vendor contracts: negotiate indemnification and audit rights.
  2. Align practices with the June 2026 EEOC guidance.
  3. Document every AI-assisted hiring decision.
  4. Prepare for potential federal requirements.
  5. Monitor Q3 legislative and enforcement developments.

Looking Ahead: Q3 2026 Predictions

  • Enforcement ramps up in NYC, California, and Colorado.
  • More class actions follow the May settlement.
  • Possible House passage of the federal AI Accountability Act.
  • 3-5 new state laws enacted.
  • Some vendors exit high-risk product features.

How EmployArmor Keeps You Current

  • Real-time alerts.
  • Auto-updates.
  • Multi-state dashboard.
  • Enforcement intel.

Stay Ahead of Regulatory Changes
Real-time monitoring + automated compliance updates
Get Started →

Frequently Asked Questions

What were the major AI hiring law developments in Q2 2026?

Q2 2026 saw three new state laws go into effect (Washington on April 1, Oregon on May 1, and Minnesota on June 15), the first major class-action settlement ($4.5 million), federal AI Accountability Act advancing out of committee, and Massachusetts passing the strictest AI hiring law to date (effective January 2027). NYC also issued 47 violation notices—more than all of 2025 combined.

Should I start complying with Massachusetts law now even though it's not effective until January 2027?

If you hire in Massachusetts: yes, start preparing now. The law requires pre-deployment bias audits with intersectional analysis, which takes 2-4 months to complete. You'll also need to find an approved auditor from the state registry (launches Q4 2026). If you don't hire in MA, monitor developments as other states may adopt this model.

If federal AI hiring law passes, will it replace state laws?

The current federal AI Accountability Act includes partial preemption—it would set a national compliance floor but allow states to impose stricter requirements. This means you'd need to comply with federal law plus any stricter state laws in jurisdictions where you hire. It wouldn't eliminate state-by-state compliance complexity.

How worried should I be about class-action lawsuits after the $4.5M settlement?

Moderately concerned. The May 2026 settlement signals that plaintiffs' attorneys see AI hiring discrimination as a viable litigation target. Best defenses: maintain a strong compliance program, conduct regular bias audits, document all AI hiring decisions thoroughly, and avoid high-risk AI features like facial expression analysis and culture fit scoring.

What's the most common AI hiring compliance violation in 2026?

Failure to conduct annual bias audits accounts for 65% of NYC Local Law 144 violations issued in Q2 2026. Many employers either skip audits entirely or let them lapse past the 12-month renewal deadline. If you're in NYC, California, or other audit-required jurisdictions and haven't audited in 12+ months, this should be your top priority.

Last updated: June 30, 2026

Educational Resource: This guide is for informational purposes only and does not constitute legal advice. Requirements vary by jurisdiction, company size, and specific AI tool usage. Consult qualified legal counsel to determine your organization's specific obligations.

Ready to comply?

Get your personalized compliance assessment in 2 minutes — free.