What The Workday Lawsuit Reveals About AI Bias—And How To Prevent It

HR and finance software company Workday, Inc. is facing a collective-action lawsuit over claims that the artificial intelligence it uses to screen job applicants discriminated against candidates 40 years old and over. In 2024, Derek Mobley filed an employment discrimination lawsuit against Workday, alleging that its algorithm-based job applicant screening system discriminated against him and other applicants based on race, age and disability. Four additional plaintiffs have since accused the company of age discrimination. A Workday spokesperson disputed the claims to HR Dive, stating, “This is a preliminary ruling at an early stage of this case, and before the facts have been established. We’re confident that once those facts are presented to the court, the plaintiff’s claims will be dismissed.”
Data compiled by DemandSage estimates that in 2025, 87% of companies use AI for recruitment. Applicant tracking systems like Workable, BambooHR, Pinpoint ATS, and Rippling, which employers use to manage recruitment and hiring, rely on AI to streamline and automate the process. Companies are leaning heavily on AI to make crucial recruitment and hiring decisions, but the tools used so frequently during the employment process are laden with bias. One example was an AI recruiting tool built by Amazon.com Inc.’s machine-learning specialists, which was found to discriminate against women; the company scrapped the tool in 2018.
AI bias is pervasive in recruitment and hiring tools. A 2024 study from the University of Washington revealed racial and gender bias in AI tools used to screen resumes. There can be data bias, which occurs when AI systems are trained on biased data containing an overrepresentation of some groups (white people, for example) and an underrepresentation of others (non-white people, for example). This can result in an AI tool that rejects qualified job candidates because it was trained on skewed data. There is also algorithmic bias, which can include developer coding mistakes, where a developer’s biases become embedded in an algorithm. An example is an AI system designed to flag job applicants whose resumes include certain terms meant to signal leadership skills, like “debate team,” “captain” or “president.” These key terms could end up filtering out job candidates from less affluent backgrounds or underrepresented racial groups, whose leadership potential might show up in non-traditional ways.
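To make the keyword problem concrete, here is a minimal, hypothetical Python sketch of the kind of screener described above. The keyword list, function name and sample resumes are invented for illustration, not taken from any real product:

```python
# Hypothetical sketch of a keyword-based resume screener.
# The hard-coded keyword list is where developer bias can creep in:
# "debate team" and "captain" signal leadership in some settings,
# while equivalent experience elsewhere never matches.

LEADERSHIP_KEYWORDS = {"debate team", "captain", "president"}

def passes_screen(resume_text: str) -> bool:
    """Flag a resume as showing leadership only if it contains one of
    the hard-coded keywords. A candidate who led a community program
    or a family business scores zero, even though the underlying
    skill is the same."""
    text = resume_text.lower()
    return any(keyword in text for keyword in LEADERSHIP_KEYWORDS)

resumes = [
    "Captain of the varsity rowing team; debate team finalist.",
    "Organized a 40-volunteer neighborhood tutoring program.",
]
for resume in resumes:
    print(passes_screen(resume), "-", resume)
# True  - the keyword resume passes
# False - equivalent leadership experience is filtered out
```

The bias here is not in any single line of code; it is in the assumption that leadership only looks one way.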
Two other types of bias, proxy data bias and evaluation bias, can show up in recruitment and hiring tools. Proxy data bias arises when proxies, or substitutes, stand in for attributes like race and gender. An example is an algorithm that prioritizes job candidates who attended Ivy League or other elite institutions, which may filter out candidates who went to historically Black colleges and universities (HBCUs), community colleges or state schools. Evaluation bias results from how the data is evaluated. An example is an organization assessing candidates for culture fit (which is notoriously biased) and training an AI tool to prioritize job candidates who have particular hobbies listed on their resumes or who communicate in particular ways, which can disadvantage candidates from cultures outside the dominant norm.
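Proxy data bias can be just as mechanical. The hypothetical sketch below (the school list, weights and scores are all invented for illustration) shows how a scoring function that never sees race can still encode it through institution prestige:

```python
# Hypothetical sketch of proxy data bias: no protected attribute is
# used directly, but an "elite school" bonus acts as a proxy for one.

ELITE_SCHOOLS = {"harvard", "yale", "princeton", "stanford"}

def candidate_score(years_experience: int, school: str) -> float:
    """Weight school prestige heavily. Graduates of HBCUs, community
    colleges and state schools with identical experience rank lower,
    so the proxy quietly filters them out."""
    prestige_bonus = 5.0 if school.lower() in ELITE_SCHOOLS else 0.0
    return years_experience + prestige_bonus

print(candidate_score(8, "Howard University"))  # 8.0
print(candidate_score(3, "Stanford"))           # 8.0 -- far less experience, same rank
```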
As more organizations use AI to help with employment decisions, several steps should be taken to mitigate the bias often baked into these tools. First, workplaces that use AI tools for hiring, selection and recruitment decisions should demand transparency from vendors to gain a deeper understanding of how the underlying models were trained and what is being done to ensure the training data has been audited for bias related to factors like race, gender, age and disability. In addition, companies should request frequent audits from vendors to assess the AI tools for bias. It’s important for organizations to partner with experts in ethical AI usage in the workplace to ensure that when AI is integrated into workplace systems, safeguards are in place. For example, an expert may assess whether job candidates from HBCUs are being filtered out of the talent pool; one common check is sketched below.
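One widely used audit of this kind applies the “four-fifths” (80%) rule, which compares selection rates across groups. The minimal sketch below assumes the auditor can label each applicant by group and knows each selection outcome; the group names and counts are placeholders:

```python
# Minimal sketch of an adverse-impact audit using the four-fifths rule.
# Group labels and outcome counts here are illustrative placeholders.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) tuples.
    Returns the fraction of each group that was selected."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's.
    Values below 0.8 are a conventional red flag, not legal proof."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(outcomes)
print(adverse_impact_ratios(rates))  # {'A': 1.0, 'B': 0.5} -> flag group B
```

A check like this is a starting point, not a verdict: a low ratio tells an auditor where to look, while the harder work of explaining and fixing the disparity still requires human judgment.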
When using AI in any capacity in the workplace, it’s helpful to seek guidance from your legal counsel or legal team to ensure AI tools comply with local and state laws. Transparency also applies to workplaces: organizations should be candid about AI usage during the employment process and should always consider alternative evaluation methods. AI, in many ways, has made our lives easier, more convenient and more accessible, but there are valid concerns when it comes to AI usage and fairness. If equity is the goal and your workplace uses AI for recruitment and hiring decisions, it’s fine to trust the AI (to a reasonable extent), but always verify. AI can be a powerful complement to the employment process, but it should never replace human oversight.