In early 2025, the EEOC quietly removed its guidance on artificial intelligence and workplace discrimination. The Department of Labor followed, withdrawing the Office of Federal Contract Compliance Programs' guidance on AI and equal employment opportunity for federal contractors. For many HR leaders, this looked like a green light to deploy automated hiring tools with fewer guardrails.
It isn't. The withdrawal of federal guidance doesn't change Title VII — employers remain fully liable for any tool that produces a disparate impact on protected groups. And since 2024, a growing number of states have independently enacted AI employment laws that apply right now, regardless of what the federal government does or doesn't regulate.
## The State Law Landscape in 2026
The following laws are either currently in effect or taking effect this year. Federal contractors with distributed workforces or national recruiting pipelines are likely subject to more than one of them simultaneously.
| Jurisdiction | Key Requirement | Effective | Status |
|---|---|---|---|
| New York City Local Law 144 | Independent bias audit required before using automated employment decision tools for hiring or promotion screening. Results must be publicly posted. Advance notice to candidates required. | July 2023 | In Effect |
| Illinois HB 3773 (IHRA amendment) | Notice required when AI is used in recruiting, hiring, promotion, or other covered employment decisions. Prohibits using AI in ways that result in unlawful discrimination based on a protected class. | Jan. 1, 2026 | In Effect |
| California Civil Rights Council regulations | Unlawful to use any automated decision system that discriminates in employment. Employers must conduct proactive bias testing, maintain records for four years, and ensure meaningful human oversight with override authority. | Oct. 1, 2025 | In Effect |
| Colorado SB 24-205 | Requires bias audits for high-risk AI systems used in employment decisions, with disclosure obligations for developers and deployers. Applies to systems that make or substantially influence consequential decisions. | June 30, 2026 | Takes Effect June 2026 |
Texas enacted its own AI governance law (TRAIGA), effective January 1, 2026, though the enacted version focuses primarily on government agencies rather than private employers. Legislation is pending in a dozen additional states.
## The Vendor Agreement Doesn't Protect You
The most dangerous assumption in this space is that buying an AI tool from a vendor transfers legal liability to that vendor. It doesn't. Under Title VII, the employer is responsible for the discriminatory effects of any employment practice, regardless of who built or configured the tool. If an ATS resume screener filters out candidates of a protected class at a significantly different rate, the employer is the respondent — not the SaaS company whose terms of service disclaim employment law compliance entirely.
California's regulations make this explicit, requiring employers to proactively test for bias and maintain audit records for four years. NYC Local Law 144 requires a third-party bias audit report to be posted publicly before the tool is used at all. Neither framework offers a vendor-contract safe harbor.
## What "Meaningful Human Oversight" Actually Requires
California's regulations specify that any automated decision system used in employment must have "meaningful human oversight" — someone trained and empowered to override the AI's output. This isn't a procedural formality. If a recruiter routinely accepts AI-generated rankings without independent review, the oversight requirement probably isn't met, and the employer can't credibly claim the AI was advisory rather than decisive.
For federal contractors, this matters because OFCCP compliance reviews — even under a reduced enforcement posture — can examine selection procedures under the Uniform Guidelines on Employee Selection Procedures. Those guidelines apply to any selection procedure, regardless of whether it involves AI, and adverse impact analysis remains a standard part of a compliance review.
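The adverse impact analysis referenced above is often screened with the "four-fifths rule" from the Uniform Guidelines (29 CFR 1607.4(D)): a selection rate for any group that falls below 80% of the highest group's rate is generally treated as evidence of adverse impact. The sketch below illustrates the arithmetic only; the function names and applicant counts are hypothetical, and a real analysis would also consider statistical significance and sample size.

```python
# Minimal sketch of a four-fifths rule screen. Names and numbers are
# illustrative assumptions, not drawn from any statute or vendor tool.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced past the screen."""
    return selected / applicants

def four_fifths_flags(group_rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose rate is below 80% of the highest group's rate."""
    highest = max(group_rates.values())
    return {group: rate / highest < 0.8 for group, rate in group_rates.items()}

# Hypothetical example: an AI resume screener advances 48 of 120 applicants
# from group A and 12 of 60 from group B.
rates = {
    "group_a": selection_rate(48, 120),  # 0.40
    "group_b": selection_rate(12, 60),   # 0.20
}
flags = four_fifths_flags(rates)
# group_b's rate (0.20) is only 50% of group_a's (0.40), below the 0.8
# threshold, so the tool shows prima facie adverse impact worth investigating.
```

This is the same comparison an auditor or OFCCP reviewer would start from; documenting these rates per tool and per stage is what the record-keeping requirements in the state laws above effectively demand.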
## The Federal Preemption Question Isn't Settled
In December 2025, President Trump signed an executive order directing federal agencies to identify and limit state AI laws that interfere with national AI policy or interstate commerce. The order does not repeal any state law, and no court has yet enjoined any of the laws listed above on preemption grounds. Until that changes, employers must comply with applicable state and local requirements regardless of the federal administration's regulatory posture.
Legal observers have noted that preemption arguments face a difficult path given that state employment discrimination laws generally survive federal preemption analysis — states have historically been permitted to set higher standards than federal minimums in employment law.
## A Practical Inventory
Before deploying or continuing to use any AI-assisted tool in the hiring process, HR leaders at federal contractors should be able to answer the following questions:
- In which states do we recruit or make employment decisions, and which AI employment laws apply there?
- Do we have documentation of a bias audit for each tool currently in use, and is that documentation current?
- Have we provided required notices to applicants in jurisdictions that require them?
- Is there a trained human reviewer with documented authority to override AI outputs at each stage of the selection process?
- Does our vendor contract establish what data we receive, how long it's retained, and what testing the vendor performs?
None of these questions require abandoning AI tools — they're the baseline for using them defensibly. The contractors most at risk aren't the ones who've thought carefully about these questions. They're the ones who haven't asked them yet.
## Sources
- AI in Hiring: Diverging Federal and State Perspectives — Holland & Knight (2025)
- Federal Guidance on AI Reverses Course with New Administration — K&L Gates
- AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026 — HR Defense Blog
- AI in the Workplace: US Legal Developments — Cooley (2025)
- Artificial Intelligence Regulations: State and Federal AI Laws 2026 — Drata
- Employment Discrimination and AI for Workers — EEOC (2024)
- Trump Targets State AI Laws: HR Alert on New Executive Order — HR Morning