One of the most common—and most critical—questions employers ask is: "Do I even use AI in hiring?" The answer might surprise you. Many everyday recruiting tools that don't explicitly market themselves as "artificial intelligence" still qualify as AI, automated decision-making technology (ADMT), or algorithmic employment tools under current regulations.
Misunderstanding this question has real consequences. Employers using AI without realizing it face compliance violations, penalties, and discrimination liability. Meanwhile, employers who mistakenly believe basic tools are "AI" waste resources on unnecessary compliance overhead. This guide cuts through the confusion.
🎯 Key Takeaway
If a system uses computation to substantially assist or automate employment decisions—scoring, filtering, ranking, or recommending candidates—it likely qualifies as AI under at least one regulatory framework, regardless of the underlying technology. Machine learning is sufficient but not necessary.
Legal Definitions: How Regulations Define AI
There is no single universal definition of "AI in hiring." Different jurisdictions use different terminology and capture different technologies. Understanding these variations is critical: the same tool can leave you compliant in one state and non-compliant in another.
New York City Local Law 144: "Automated Employment Decision Tool" (AEDT)
NYC's definition (Administrative Code §20-870) is the most technically specific:
"Any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions."
Key elements:
- "Computational process" — Broad enough to include traditional algorithms, not just modern AI
- "Machine learning, statistical modeling, data analytics, or artificial intelligence" — Explicitly includes ML/AI but also covers statistical and analytical approaches
- "Simplified output" — Scores, rankings, classifications, pass/fail decisions, recommendations
- "Substantially assist or replace" — Doesn't need to make the final decision; influencing human decisions qualifies
- "Employment decisions" — Hiring, promotion, or any selection decision for employment
Illinois HB 3773: "Artificial Intelligence"
Illinois (775 ILCS 5/2-108) defines AI for employment purposes as:
"A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
Key elements:
- "Machine-based system" — Any computational system, not limited to advanced AI
- "Infers... how to generate outputs" — Systems that learn patterns or apply rules to produce decisions
- "Predictions, content, recommendations, or decisions" — Covers a wide range of outputs
- "Influence... environments" — Includes systems that affect real-world decisions (hiring, promotions, etc.)
California CCPA/CPRA: "Automated Decision-Making Technology" (ADMT)
California (Cal. Code Regs. Title 11, §7002(c)) focuses on privacy and consumer rights:
"Technology that processes personal information to make, or substantially facilitate human decision-making, of decisions that produce legal or similarly significant effects concerning a consumer."
Key elements:
- "Processes personal information" — Focuses on data processing aspect
- "Make or substantially facilitate" — Again, influence is enough; doesn't need sole authority
- "Legal or similarly significant effects" — Employment decisions clearly qualify as significant
- **Notably broad:** Includes rule-based systems, not just ML/AI
Colorado AI Act (SB24-205): "High-Risk Artificial Intelligence System"
Colorado takes a risk-based approach, regulating "high-risk" systems that pose substantial risk of algorithmic discrimination. For employment, any AI system used in hiring, promotion, or termination decisions is presumptively high-risk.
Federal: EEOC and DOL Guidance
The EEOC doesn't define "AI" in its guidance but makes clear that any selection procedure—whether algorithmic, AI-driven, or human—must comply with Title VII, the ADA, and the ADEA. The EEOC's focus is on discriminatory outcomes, not the technology's classification.
The Practical Standard
Given these varying definitions, the safest operational standard is: If a system uses computation to filter, score, rank, evaluate, or recommend candidates—and those outputs influence your employment decisions—treat it as AI/ADMT under at least one regulatory framework.
Common Tools: What Qualifies and Why
✅ Almost Certainly Covered: High-Confidence AI Tools
These tools are universally recognized as AI/ADMT across all regulatory frameworks:
| Tool Type | Examples | Why It's AI | Compliance Required |
|---|---|---|---|
| Video Interview Analysis | HireVue, Spark Hire, myInterview, Modern Hire | Analyzes facial expressions, tone, word choice, speech patterns using ML/NLP | NYC bias audit, IL notice, CO assessment, CA disclosure/opt-out |
| AI Resume Screeners | Ideal, Pymetrics, Eightfold, Textio, Beamery | Automatically parses, scores, filters, and ranks candidates using ML models | All jurisdictions |
| Predictive Assessments | Pymetrics, Codility, HackerRank (AI scoring), Criteria | Uses algorithms/ML to predict job performance, culture fit, or likelihood of success | All jurisdictions |
| Chatbots & Virtual Recruiters | Olivia (Paradox), Mya, XOR, Phenom | Screens candidates through automated conversation; filters based on responses | All jurisdictions |
| Candidate Matching Engines | Eightfold, Phenom, SeekOut, HiredScore | Uses ML to match candidates to jobs based on skills, experience, and predicted fit | All jurisdictions |
⚠️ Context-Dependent: Requires Configuration Analysis
These tools may or may not qualify depending on how you configure and use them:
LinkedIn Recruiter
- AI Features (Covered): "Recommended Matches," "Likely to Engage" predictions, "Similar Candidates," AI-powered search ranking
- Non-AI Features (Not Covered): Manual boolean search, profile viewing, direct outreach without AI recommendations
- Verdict: If you rely on LinkedIn's AI recommendations or matching algorithms to decide who to contact or consider, disclosure is required. If you only use manual search and your own judgment, likely not covered.
Indeed Hiring Platform
- AI Features (Covered): Smart Sourcing (AI-powered candidate matching), Indeed Assessments with auto-scoring, "Best Match" candidate ranking, screening questions with automated filtering
- Non-AI Features (Not Covered): Basic job posting, receiving applications, manual resume review
- Verdict: If you use Indeed Assessments with auto-scoring, Smart Sourcing, or screening question filters that automatically eliminate candidates, you're using AI and must comply.
Applicant Tracking Systems (Greenhouse, Lever, Workday, iCIMS, Taleo)
- AI Features (Covered): Resume parsing with automated scoring, candidate ranking algorithms, knockout questions with auto-rejection, diversity analytics that influence decisions, predictive hiring tools
- Non-AI Features (Not Covered): Simple data storage, calendar scheduling, email communications, manual stage progression, workflow automation (moving candidates between stages without evaluation)
- Verdict: Most modern ATS platforms offer AI features—often enabled by default. You must audit your specific configuration. If the system scores, ranks, or automatically rejects candidates, it's AI.
Background Check Services (Checkr, Sterling, HireRight)
- AI Features (Covered): Automated adjudication (AI decides pass/fail based on background check results), risk scoring algorithms, predictive criminal recidivism models
- Non-AI Features (Not Covered): Pulling records and presenting them for human review without automated decision-making
- Verdict: If you use "auto-adjudication" or "recommended decision" features, you're using AI. Human-only review of raw background check data is not AI.
Skills Assessment Platforms (Codility, HackerRank, TestGorilla)
- AI Features (Covered): AI-powered plagiarism detection that disqualifies candidates, predictive scoring that estimates job performance, adaptive testing that adjusts difficulty based on ML models
- Non-AI Features (Not Covered): Fixed assessments with manual scoring, code playback for human review
- Verdict: Check whether the platform uses AI to score responses or predict outcomes. Many modern platforms do—especially for soft skills assessments.
❌ Generally Not Covered: Administrative and Non-Evaluative Tools
- Basic job boards (ZipRecruiter basic posting, Monster, CareerBuilder without AI matching)
- Calendar scheduling tools (Calendly, Chili Piper for interview booking)
- Video conferencing (Zoom, Teams, Google Meet for live interviews without analysis)
- Email/communication tools (Gmail, Outlook, candidate messaging)
- Manual resume review (Humans reading resumes without algorithmic assistance)
- Reference checking (Human-conducted phone or email reference checks)
- HRIS data storage (Systems that store employee data without making decisions)
The Gray Areas: Emerging Technologies
Large Language Models (ChatGPT, Claude, etc.)
If you use ChatGPT or similar tools to evaluate resumes, draft screening questions, or assess candidate responses, you're using AI. Even though these are general-purpose tools not marketed for hiring, using them to assist employment decisions brings them within regulatory scope.
Browser Extensions and Plugins
Chrome extensions that analyze LinkedIn profiles, score candidates, or provide recommendations likely qualify as AI tools even if they're not "official" hiring software.
Custom Internal Tools
If your engineering team built a custom tool—even a simple one—that scores, filters, or ranks candidates algorithmically, it's an AI hiring tool under most definitions. Home-grown tools are not exempt.
Common Misconceptions
Myth #1: "It's not machine learning, so it's not AI"
Reality: Most regulations define AI broadly to include rule-based systems, statistical models, and data analytics—not just modern machine learning. NYC explicitly includes "statistical modeling and data analytics." A rule-based system that auto-rejects candidates without a bachelor's degree qualifies as AI in many jurisdictions.
Myth #2: "A human makes the final decision, so we're exempt"
Reality: Regulations cover tools that "substantially assist" or "facilitate" human decisions. If the AI narrows your candidate pool from 1,000 to 20, even though a human picks the final 5, the AI substantially assisted. You're not exempt.
Myth #3: "Our vendor says they're compliant, so we're covered"
Reality: Most AI hiring laws place compliance obligations on the employer using the tool, not just the vendor. Vendor compliance is necessary but not sufficient. You must independently ensure your use of the tool complies with applicable laws.
Myth #4: "We don't call it AI, so it's not AI"
Reality: Regulatory definitions are based on functionality, not marketing labels. If a tool functions as AI under a legal definition—using computation to evaluate candidates—it's AI regardless of what the vendor calls it.
Myth #5: "Small tools or free tools don't count"
Reality: There is no "de minimis" exception for small-scale or low-cost AI. A free Chrome extension that scores LinkedIn profiles is still an AI tool if it influences your decisions.
How to Conduct an AI Tool Audit
Phase 1: Complete Inventory
Create a comprehensive spreadsheet listing every software tool, platform, or system involved in your hiring process:
- Sourcing tools (LinkedIn, Indeed, job boards, referral platforms)
- Application collection (ATS, careers site, application forms)
- Screening tools (resume parsers, knockout questions, pre-screen assessments)
- Assessment platforms (skills tests, personality assessments, situational judgment tests)
- Interview tools (video platforms, scheduling, interview guides)
- Background checks (criminal history, employment verification, education verification)
- Reference checking (automated surveys, phone systems)
- Decision support (hiring committee tools, offer management)
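If you want that inventory in a machine-readable form rather than a spreadsheet, here is one minimal sketch in Python. The field names, categories, and example rows are illustrative assumptions, not a prescribed schema:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class HiringTool:
    """One row of the AI tool inventory (illustrative fields)."""
    name: str      # e.g. "Greenhouse"
    vendor: str
    category: str  # "sourcing", "screening", "assessment", "scheduling", ...
    uses_ai: str   # "yes", "no", or "unknown" pending vendor inquiry
    notes: str = ""

inventory = [
    HiringTool("Greenhouse", "Greenhouse Software", "ATS", "unknown",
               "Audit scoring/ranking configuration"),
    HiringTool("Calendly", "Calendly", "scheduling", "no"),
]

# Export the inventory as a spreadsheet-friendly CSV for the compliance file
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(HiringTool)])
    writer.writeheader()
    writer.writerows(asdict(tool) for tool in inventory)
```

Tracking an `unknown` status explicitly is useful: those are the tools that move on to the vendor-inquiry phase below.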
Phase 2: Functional Analysis
For each tool, answer these questions:
1. Does it analyze candidate data? (Parsing resumes, analyzing video, scoring responses)
2. Does it generate scores, rankings, or classifications? (Qualification scores, culture fit ratings, pass/fail decisions)
3. Does it provide recommendations? ("Best matches," "top candidates," "recommended for interview")
4. Does it automate decisions? (Auto-rejecting candidates, filtering applicants, knockout questions)
5. Does it use algorithms, machine learning, or statistical models? (Check vendor documentation)
6. Do the outputs influence your hiring decisions? (Do you actually use the scores/rankings to decide who advances?)
If you answer "yes" to questions 1 AND (2, 3, or 4) AND 6 → the tool likely qualifies as AI.
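That qualification rule is simple boolean logic, so it can be captured in a few lines if you want to apply it consistently across a large inventory. A minimal sketch in Python (the question keys follow the numbered list above; this is an illustration of the rule, not legal advice):

```python
def likely_qualifies_as_ai(answers: dict) -> bool:
    """Apply the Phase 2 rule: yes to Q1 AND (Q2 or Q3 or Q4) AND Q6.

    Keys (True/False):
      q1: analyzes candidate data
      q2: generates scores, rankings, or classifications
      q3: provides recommendations
      q4: automates decisions
      q6: outputs influence your hiring decisions
    """
    return (
        answers["q1"]
        and (answers["q2"] or answers["q3"] or answers["q4"])
        and answers["q6"]
    )

# Example: a resume parser that scores candidates, and you use the scores
resume_screener = {"q1": True, "q2": True, "q3": False, "q4": False, "q6": True}
print(likely_qualifies_as_ai(resume_screener))  # True -> treat as AI/ADMT
```

Question 5 (whether the vendor documents algorithms or ML) is deliberately omitted from the rule itself: as the myths section explains, functionality determines coverage, and the vendor's own description is evidence rather than a deciding factor.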
Phase 3: Vendor Documentation Review
Review each vendor's:
- Marketing materials: Do they mention AI, ML, algorithms, or automation?
- Technical documentation: What does the system actually do behind the scenes?
- Privacy policy: Often describes data processing and algorithmic decision-making
- Terms of service: May include language about automated systems
Key search terms:
- Artificial intelligence, AI, machine learning, ML, deep learning
- Algorithm, automated decision-making, computational model
- Predictive analytics, matching technology, recommendation engine
- Natural language processing (NLP), computer vision
- Facial analysis, sentiment analysis, emotion recognition
- Statistical modeling, data science, analytics
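When you have many vendor documents to triage, a simple keyword scan over exported text can prioritize which ones need a careful read. A minimal sketch (the term list mirrors the one above; short abbreviations like "AI" and "ML" are omitted because substring matching would produce false positives, and a hit means "review manually," not "confirmed AI"):

```python
AI_TERMS = [
    "artificial intelligence", "machine learning", "deep learning",
    "algorithm", "automated decision", "computational model",
    "predictive analytics", "matching technology", "recommendation engine",
    "natural language processing", "computer vision",
    "facial analysis", "sentiment analysis", "emotion recognition",
    "statistical model", "data science",
]

def flag_ai_terms(document_text: str) -> list:
    """Return the AI-related terms found in the document (case-insensitive)."""
    text = document_text.lower()
    return [term for term in AI_TERMS if term in text]

# Example: scanning an excerpt from a vendor privacy policy
policy = "We use machine learning and predictive analytics to rank candidates."
print(flag_ai_terms(policy))  # ['machine learning', 'predictive analytics']
```

Any document that comes back with hits goes into the manual-review pile for Phase 4; a clean scan does not prove the absence of AI, since vendors do not always use these terms.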
Phase 4: Direct Vendor Inquiry
For any ambiguous tools, send this standard inquiry email:
Subject: AI/ADMT Compliance Information Request
Dear [Vendor Name],
As part of our compliance with AI hiring regulations (NYC Local Law 144, Illinois HB 3773, Colorado SB24-205, California CCPA ADMT, and related laws), we need to understand whether [Product Name] uses artificial intelligence, machine learning, or automated decision-making technology.
Please provide the following information:
- Does [Product Name] use AI, machine learning, statistical modeling, or algorithmic decision-making in any features we use?
- If yes, which specific features involve AI/automation?
- What data does the AI process and what outputs does it generate?
- Have you conducted bias testing or disparate impact analysis on the AI components?
- Can you provide documentation of compliance with applicable AI hiring laws?
Thank you for your prompt response.
Phase 5: Document Your Findings
Create a final compliance matrix:
| Tool Name | Vendor | Function | Uses AI? | Compliance Required? |
|---|---|---|---|---|
| Greenhouse | Greenhouse Software | ATS with candidate scoring | Yes | Disclosure, bias monitoring |
| Calendly | Calendly | Interview scheduling | No | None |
| HireVue | HireVue | Video interview analysis | Yes | Full compliance (audits, disclosure, etc.) |
Decision Tree: Is This Tool AI?
Start Here: Does the tool process candidate data?
→ No: Not AI. No compliance needed.
→ Yes: Go to next question ↓
Does it generate scores, rankings, recommendations, or automated decisions?
→ No: Likely not AI. Probably just data storage/admin.
→ Yes: Go to next question ↓
Do you use those outputs to make employment decisions?
→ No: Low risk, but still document that outputs are ignored.
→ Yes: Go to next question ↓
Does it use algorithms, ML, statistical models, or computational analysis?
→ Yes: This is AI. Compliance required.
→ Unsure: Ask the vendor. If they won't answer clearly, assume yes.
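The decision tree above can also be sketched as a single function, with the "unsure" branch defaulting to yes as the tree recommends. The return strings are illustrative labels, not regulatory categories:

```python
from typing import Optional

def classify_tool(
    processes_candidate_data: bool,
    generates_outputs: bool,          # scores, rankings, recommendations, auto-decisions
    outputs_used_in_decisions: bool,
    uses_algorithms: Optional[bool],  # None = vendor won't answer clearly
) -> str:
    """Walk the decision tree top to bottom; unclear vendor answers default to 'yes'."""
    if not processes_candidate_data:
        return "not AI - no compliance needed"
    if not generates_outputs:
        return "likely not AI - data storage/admin"
    if not outputs_used_in_decisions:
        return "low risk - document that outputs are ignored"
    if uses_algorithms is None or uses_algorithms:
        return "AI - compliance required"
    return "likely not AI - re-verify configuration"

# A vendor that won't answer clearly is treated as AI
print(classify_tool(True, True, True, None))  # AI - compliance required
```

Encoding the conservative default (unclear answer means assume coverage) in code keeps different reviewers from resolving ambiguous vendors inconsistently.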
What To Do If You're Using AI
Once you've identified AI tools in your hiring process, compliance typically requires:
1. Candidate Notification (All Jurisdictions)
- Provide clear notice that AI is used in hiring
- Explain what the AI evaluates and how it influences decisions
- Timing: at or before AI is used (job posting, application, or assessment invitation)
- Method: prominent disclosure on careers page, job postings, or application forms
2. Bias Testing and Audits (NYC, Emerging in Other Jurisdictions)
- NYC requires annual independent bias audits testing for disparate impact
- Other jurisdictions recommend regular bias monitoring
- Test AI outputs for statistically significant disparities across race, gender, and other protected categories
3. Impact Assessments (Colorado, California)
- Document the purpose, functionality, and risks of high-risk AI systems
- Evaluate potential for discrimination and implement safeguards
- Submit to regulators upon request
4. Opt-Out or Alternative Processes (California)
- Provide mechanisms for candidates to opt out of automated decision-making
- Offer human review as an alternative
5. Documentation and Recordkeeping
- Maintain records of AI tools used, disclosures provided, and compliance efforts
- Retention period: typically 3-4 years
Tools by Use Case
Still unsure about specific tools? Here's a detailed breakdown by category:
Sourcing & Candidate Discovery
- LinkedIn Recruiter (AI features): AI
- SeekOut: AI (ML-powered sourcing)
- Phenom: AI (AI talent marketplace)
- Entelo: AI (predictive sourcing)
- Basic Boolean search on job boards: Not AI
Resume Screening
- Ideal: AI
- Eightfold.ai: AI
- HiredScore: AI
- Pymetrics: AI
- Manual resume reading: Not AI
Video Interviews
- HireVue (with assessment scoring): AI
- Spark Hire (basic recording only): Not AI
- Modern Hire: AI (if using scoring features)
- Zoom/Teams for live interviews: Not AI
Skills Assessments
- Codility (with AI scoring): AI
- HackerRank (with predictive scoring): AI
- TestGorilla: AI (if using AI proctoring or adaptive testing)
- Fixed assessments with manual scoring: Not AI
Next Steps
Now that you know what counts as AI, the next critical step is understanding your specific compliance obligations based on where you hire and what tools you use.
Take the Free Scorecard
Answer 5 quick questions and get a personalized compliance assessment.
Get Your Compliance Score →
Related Resources
- → Complete AI Hiring Compliance Guide 2026
- → Bias Audit Implementation Guide
- → AI Disclosure Notice Templates
- → How to Assess AI Vendors
- → HireVue Compliance Guide
- → LinkedIn Recruiter Compliance
Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.