
The Modern Workplace

Does Your Company’s Artificial Intelligence Software Violate the ADA?

The French sociologist Jean Baudrillard once said: “The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.” While some may view this as a harsh critique of a tool that has improved many facets of modern society, artificial intelligence (“AI”) is not infallible, particularly in the employment context. The Equal Employment Opportunity Commission (“EEOC”) recently issued guidance on employers’ use of AI in employment-related decisions, such as applicant screening, hiring, and performance evaluations. The driving force behind this guidance? The potential for unlawful discrimination against certain groups.

Signs Your AI Software Might Inadvertently Be Discriminating Against Certain Groups

Federal employment laws, including the Americans with Disabilities Act (“ADA”), prohibit discrimination in employment based on various legally protected characteristics. In many instances, the law prohibits not only intentional discrimination but also disparate impact discrimination, which arises when a facially neutral policy or practice has a statistically significant adverse effect on a protected group.

Some Ways That AI Technology Might Be Used in Making Employment Decisions

  • It can scan resumes or applications for certain key words and prioritize or filter those resumes or applications based on the number of key word hits (see the sketch after this list).
  • It can test candidates and calculate “job fit” scores based on intangible factors such as personality or “cultural fit.”
  • It can use video software that evaluates candidates based on facial expressions, speech patterns, eye contact, and other mannerisms.
  • It can employ a “chatbot” that has been programmed to ask about candidates’ qualifications, with code that automatically prevents those who don’t meet certain requirements from advancing to the next step in the application process.
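
As a concrete illustration of the first bullet, here is a minimal sketch of keyword-based resume screening. The keyword list, sample resumes, and two-hit threshold are all invented for this example; real products are more sophisticated, but the failure mode is the same: anything the keyword list doesn’t anticipate, including a nontraditional career path, is silently filtered out.

```python
# Hypothetical illustration of keyword-based resume screening.
# The keyword list, resumes, and threshold are invented for this example.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def keyword_score(resume_text: str) -> int:
    """Count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def screen(resumes: list[str], min_hits: int = 2) -> list[str]:
    """Keep only resumes that hit at least `min_hits` keywords."""
    return [r for r in resumes if keyword_score(r) >= min_hits]

resumes = [
    "10 years of Python and SQL development",          # passes: 2 hits
    "Led database work (Postgres) and team projects",  # fails: 0 exact hits
]
print(screen(resumes))  # the second resume is silently dropped
```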

The above examples of AI usage, while potentially helpful for efficient screening and hiring, may lead to an ADA violation if an employer doesn’t take appropriate precautions. An AI tool like those described above may (inadvertently or not) screen out an applicant with a disability, even though that applicant could very well perform the job duties with or without a reasonable accommodation.

Consider this example: Applicant A applies for a position with your company. Your company’s job application software directs all applicants to fill in their job history, including the duration of each position held. Your application software then uses AI to review the submitted applications and, in what you thought would save you significant time leafing through resumes, automatically rejects any applicant with a gap in employment greater than two years. Applicant A has a three-year gap in their employment history due to their disability, but is capable of performing the job duties with a reasonable accommodation. Because of the algorithm in your company’s software, this individual automatically loses out on the job opportunity, and, unbeknownst to you, your company may be exposed to an ADA claim.
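
To make the mechanics of this example concrete, here is a minimal sketch of such a gap-based auto-reject rule, using hypothetical dates and the two-year threshold from the example above. Note what the code cannot do: it never learns why a gap exists.

```python
from datetime import date

# Hypothetical applicant history: (start, end) dates of each position held.
# All dates are invented for this illustration.
def max_gap_years(positions: list[tuple[date, date]]) -> float:
    """Largest gap, in years, between consecutive positions."""
    spans = sorted(positions)
    gaps = [
        (nxt_start - prev_end).days / 365.25
        for (_, prev_end), (nxt_start, _) in zip(spans, spans[1:])
    ]
    return max(gaps, default=0.0)

def auto_reject(positions: list[tuple[date, date]],
                threshold_years: float = 2.0) -> bool:
    """The blanket rule from the example: reject any gap over the threshold.
    It cannot distinguish a disability-related gap from any other gap."""
    return max_gap_years(positions) > threshold_years

applicant_a = [
    (date(2015, 1, 1), date(2019, 6, 30)),
    (date(2022, 8, 1), date(2025, 1, 1)),  # three-year gap due to disability
]
print(auto_reject(applicant_a))  # True: rejected with no human review
```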

Here’s another scenario: Your company utilizes a “chatbot” as part of the user interface of your application portal. The chatbot was programmed to ask certain questions and to tailor additional questions based on an applicant’s responses, in order to save you and your recruiting team valuable time and effort. The chatbot asks an individual applicant a question, and the response triggers a question about the applicant’s medical history. Because no conditional offer of employment has been extended at this stage in the process, the chatbot has violated the ADA by asking an improper medical question, regardless of whether the applicant was then screened out or invited to schedule an interview.
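
Here is a simplified sketch of how such a branching chatbot goes wrong, with an invented question tree. Asking whether an applicant can perform a job function is generally permissible pre-offer; the structural problem is the branch that follows up with a medical inquiry before any conditional offer exists.

```python
# Hypothetical branching question tree for an application chatbot.
# Each answer selects the next question; the flagged branch illustrates
# the pre-offer medical inquiry described above.
QUESTION_TREE = {
    "start": {
        "question": "Can you lift 25 pounds regularly?",
        "branches": {
            "yes": "availability",
            # PROBLEM: this follow-up is a medical inquiry, and no
            # conditional offer of employment has been made yet.
            "no": "medical_history",
        },
    },
    "medical_history": {
        "question": "What medical condition prevents you from lifting?",
        "branches": {},
    },
    "availability": {
        "question": "When can you start?",
        "branches": {},
    },
}

def next_question(node: str, answer: str) -> str | None:
    """Follow the applicant's answer to the next node, if any."""
    branch = QUESTION_TREE[node]["branches"].get(answer.lower())
    return QUESTION_TREE[branch]["question"] if branch else None

print(next_question("start", "no"))
# -> "What medical condition prevents you from lifting?" (pre-offer: improper)
```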

When Using AI Software to Make Employment Decisions, Do Your Homework

Employers who utilize AI tools to help streamline the employment decision-making process should know that they, not their AI software vendor, will likely be responsible for any resulting ADA violations. The following tips may help avoid a “failure to hire” lawsuit.

  1. Make sure that your team engages with vendors before signing a contract to use their software. Ask them questions that will help you evaluate whether their software was created with individuals with disabilities in mind. For example: What are the tool’s accessibility features? Are there alternative formats available? Does the tool require the use of a mouse or keyboard? Is it compatible with screen readers?
  2. Understand that not everything needs to be automated, and sometimes there is no adequate substitute for the human mind. Limit your reliance on AI tools in the hiring/evaluation process to those that focus on evaluating skills, patterns, and behaviors that are not just directly related to, but necessary for, the specific position.
  3. Transparency is key. Inform all applicants upfront that your application tool uses AI software and explain in layman’s terms what the tool has been designed to evaluate, and how it will do so. Include conspicuous and clear language at the beginning of the application process stating that reasonable accommodations are available and explaining how an applicant can make such a request. Train your team to ensure that they will properly identify and process requests for reasonable accommodation.
  4. Whether you develop your own algorithms in-house or you contract with a software vendor, consider consulting with an attorney to review chatbot questions, evaluation tools, and screening methods before deploying them.
  5. Consider working with your vendor to validate the AI technology and to screen it for any intentional discrimination or disparate impact on individuals with disabilities (see the sketch below).
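
On the fifth tip, one rough first-pass screen is the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines: flag the tool if any group’s selection rate falls below 80% of the highest group’s rate. That rule was developed for other discrimination contexts, so for ADA purposes treat it only as a starting point for a conversation with counsel and your vendor. The sketch below uses invented counts.

```python
# A rough disparate-impact screen using the "four-fifths" rule of thumb.
# The counts below are invented; a real validation study would involve
# counsel and a qualified testing expert, not just this arithmetic.
def four_fifths_flag(groups: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the highest rate.
    Each group maps to (selected, total applicants)."""
    rates = {g: sel / apps for g, (sel, apps) in groups.items()}
    best = max(rates.values())
    return {g: rate < 0.8 * best for g, rate in rates.items()}

# Hypothetical outcomes per group: (selected, total applicants)
outcomes = {
    "disclosed_disability": (6, 40),      # 15% selection rate
    "no_disclosed_disability": (30, 80),  # 37.5% selection rate
}
print(four_fifths_flag(outcomes))
# -> {'disclosed_disability': True, 'no_disclosed_disability': False}
#    15% is below 80% of 37.5% (i.e., 30%), so the tool gets flagged.
```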

The information contained in this post is provided to alert you to legal developments and should not be considered legal advice. It is not intended to and does not create an attorney-client relationship. Specific questions about how this information affects your particular situation should be addressed to one of the individuals listed. No representations or warranties are made with respect to this information, including, without limitation, as to its completeness, timeliness, or accuracy, and Lathrop GPM shall not be liable for any decision made in connection with the information. The choice of a lawyer is an important decision and should not be based solely on advertisements.
