As artificial intelligence (AI) becomes increasingly integrated into hiring and employment practices, including performance management and termination decisions, the use of AI in employment decisions is drawing growing legal scrutiny. Recent lawsuits are putting the spotlight on how AI-powered tools may unintentionally introduce discriminatory bias. There are lessons to be learned from this ongoing litigation, and steps employers should consider taking now to ensure these tools do not perpetuate bias or violate legal standards.
Ongoing Lawsuits
In Mobley v. Workday, Inc., a federal lawsuit filed in the Northern District of California, the plaintiff alleges that Workday’s AI-driven applicant recommendation system, used by employers, violates federal antidiscrimination laws by having a disparate impact on job applicants based on race, age, and disability. The plaintiff, who is over 40, alleges that he submitted over 100 job applications through platforms using Workday’s AI tools and was rejected each time. He claims the system unfairly penalizes older candidates and reflects employer bias because of its reliance on biased training data.
Although Workday was not the plaintiff’s employer, the lawsuit argues that it can be held liable as an “agent” of employers who use its software. The case gained momentum when, in July 2024, the court denied Workday’s second motion to dismiss, allowing the claims to proceed. Then, on May 16, 2025, the court granted conditional certification of the Age Discrimination in Employment Act (ADEA) claims, enabling the case to move forward as a nationwide collective action. The court’s decision to certify the collective, involving potentially hundreds of millions of applicants, signals growing judicial scrutiny of AI systems used in employment contexts. This case is one of the first major legal challenges to the use of AI in hiring decisions and could set a precedent for how algorithmic tools are regulated under employment law.
Similarly, in Harper v. Sirius XM, a federal lawsuit filed in the Eastern District of Michigan, the plaintiff alleges that Sirius XM’s use of AI-powered hiring software unlawfully discriminates against Black applicants. In Harper, the plaintiff claims he applied for about 150 positions at Sirius XM but was either automatically rejected or dismissed after a single interview. The plaintiff is seeking class action status, citing violations of Title VII of the Civil Rights Act. The complaint argues that Sirius XM’s reliance on AI hiring tools perpetuates historical bias by using proxies such as employment history, geography, and education, factors that can disproportionately disadvantage Black candidates.
These cases highlight the growing legal risks companies face when deploying AI hiring tools without adequate bias audits or transparency.
Lessons Learned
- Human oversight is essential. AI should support, not replace, human judgment.
- A deep understanding of the tools is necessary. Employers must understand and be able to explain how the AI makes decisions. A common pitfall is overreliance on vendor representations as to how tools work, rather than sufficient independent analysis.
- Stay informed and agile. National employers, in particular, must stay informed about the patchwork of AI regulations across multiple jurisdictions and navigate the local laws that apply to them. For example, some states require employers to provide notice when AI is used in hiring, or privacy notices explaining how applicant or employee data is being used.
Proactive Steps for Employers to Reduce Legal Risk
- Audit AI systems regularly to detect and eliminate discriminatory outcomes, especially those affecting protected characteristics like race, gender, age, or disability. Ensuring that AI tools are trained on diverse and representative data is critical to minimizing bias.
- Focus on transparency, disclosure, data privacy, and accessibility before implementing AI hiring software. Employers must be able to explain how the AI makes decisions, inform applicants when AI is used, and ensure compliance with privacy laws. Legal counsel should be involved early to help navigate these complexities and reduce risk.
- Consider appointing a compliance lead to monitor evolving laws and apply the strictest standards company-wide to help ensure consistency and legal compliance. Employers should also document their efforts to comply with regulations, as this documentation can serve as a protective measure in the face of legal scrutiny.
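As a simplified illustration of what a first-pass bias audit might check, the sketch below computes adverse impact ratios under the EEOC "four-fifths rule," a common initial screen for disparate impact in selection outcomes. The group labels and counts are hypothetical; a real audit requires statistically rigorous methods and review by legal counsel.

```python
# Simplified, hypothetical illustration of a four-fifths-rule screen,
# one common first-pass check in a bias audit of an AI hiring tool.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected (advanced)."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applicants).

    Returns each group's selection rate divided by the highest group's
    rate. A ratio below 0.80 is commonly treated as preliminary evidence
    of adverse impact under the EEOC's four-fifths rule."""
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    top_rate = max(rates.values())
    return {g: r / top_rate for g, r in rates.items()}

# Hypothetical audit data: (candidates advanced by the AI screen, total applicants)
outcomes = {"group_a": (48, 120), "group_b": (30, 125)}
ratios = adverse_impact_ratios(outcomes)           # group_a: 1.0, group_b: 0.6
flagged = [g for g, r in ratios.items() if r < 0.80]
print(ratios)
print(flagged)                                     # ['group_b'] warrants further review
```

A flagged ratio is only a starting point, not a legal conclusion: it should prompt deeper statistical analysis and a review of the features (such as the geography and education proxies at issue in Harper) that may be driving the disparity.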
Ultimately, navigating the evolving legal landscape around AI requires caution, adaptability, and ongoing diligence. Employers must treat compliance as a continuous process, not a one-time task, and invest in systems and partnerships that can evolve with the law. By doing so, they can harness the benefits of AI while upholding fairness, transparency, and legal integrity in their employment practices.