EEOC Issues New Resource On Artificial Intelligence Use In Employment Decisions

By Alex W. Karasik and Gerald L. Maatman, Jr.

Duane Morris Takeaways: On May 18, 2023, the EEOC released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (hereinafter, the “Resource”), to provide employers with guidance on preventing discrimination when utilizing artificial intelligence. For employers who are contemplating whether to use artificial intelligence in employment matters such as selecting new employees, monitoring performance, and determining pay or promotions, the Resource is a “must-read” in terms of implementing safeguards to comply with civil rights laws.

Background

As the EEOC is well aware, employers now have a wide variety of algorithmic decision-making tools available to assist them in making employment decisions, including recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, dismissal, and referral. Employers increasingly utilize these tools in an attempt to save time and effort, increase objectivity, optimize employee performance, or decrease bias. The EEOC’s Resource seeks to show employers how to monitor these newer algorithmic decision-making tools and ensure compliance with Title VII.

To set the parameters for the Resource, the EEOC first defines a few key terms:

  • Software: Broadly, “software” refers to information technology programs or procedures that provide instructions to a computer on how to perform a given task or function.
  • Algorithm: Generally, an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end.
  • Artificial Intelligence: In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.

Taken together, these tools mean that employers sometimes utilize different types of software that incorporate algorithmic decision-making at a number of stages of the employment process. The EEOC’s examples of how employers can utilize artificial intelligence include:

  • Resume scanners that prioritize applications using certain keywords;
  • Employee monitoring software that rates employees on the basis of their keystrokes;
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject candidates who do not meet pre-defined requirements;
  • Video interviewing software that evaluates candidates based on their speech patterns and facial expressions; and
  • Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit,” typically based on their performance on a game or on a more traditional test.

“Questions And Answers” About AI

After summarizing the pertinent provisions of Title VII, the heart of the EEOC’s Resource is presented in a question and answer format. First, the EEOC defines a “selection procedure” as any “measure, combination of measures, or procedure” used as a basis for an employment decision. Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is “substantially” less than the selection rate for individuals in another group. If there is an adverse impact, then use of the tool will run afoul of Title VII unless the employer can demonstrate that, pursuant to Title VII, such use is “job related and consistent with business necessity.”

The EEOC then posits the critical question of whether an employer is responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor. This is an important issue since many companies seek the assistance of third-party technologies to facilitate some of their employment-decision processes. The EEOC indicates that “in many cases, yes,” employers are responsible for the actions of their agents, such as third-party vendors. Ultimately, if the employer is making the final employment decision, the buck would likely stop with the employer in terms of Title VII liability.

The EEOC also defines the term “selection rate,” which refers to the proportion of applicants or candidates who are hired, promoted, or otherwise selected. The selection rate for a group of applicants or candidates is calculated by dividing the number of persons hired, promoted, or otherwise selected from the group by the total number of candidates in that group. By virtue of including this definition in the Resource, a reading of the tea leaves suggests that the EEOC will be monitoring selection rates to determine whether there is an adverse impact in employment decisions that were catalyzed by the use of artificial intelligence.
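To make that calculation concrete, below is a minimal sketch in Python of the selection-rate arithmetic the Resource describes; the function name and sample counts are our own illustrations, not the EEOC’s.

```python
def selection_rate(selected: int, total_candidates: int) -> float:
    """Proportion of applicants or candidates in a group who were selected."""
    if total_candidates == 0:
        raise ValueError("group has no candidates")
    return selected / total_candidates

# Illustrative counts only: 48 of 80 applicants in a group are hired.
print(f"Selection rate: {selection_rate(48, 80):.0%}")  # prints "Selection rate: 60%"
```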

In terms of what is an acceptable selection rate, the EEOC relies on the “four-fifths rule,” which is a general rule of thumb for determining whether the selection rate for one group is “substantially” different than the selection rate of another group. The rule states that one rate is substantially different than another if their ratio is less than four-fifths (or 80%). For example, if the selection rate for Black applicants was 30% and the selection rate for White applicants was 60%, the ratio of the two rates is thus 30/60 (or 50%). Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule dictates that the selection rate for Black applicants is substantially different than the selection rate for White applicants, which may be evidence of discrimination against Black applicants.
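The comparison can likewise be sketched in a few lines of Python using the EEOC’s example rates of 30% and 60%; the helper function below is an illustrative construction, not anything prescribed by the agency.

```python
def four_fifths_check(rate_a: float, rate_b: float) -> bool:
    """Return True when the lower selection rate is at least 80% of the higher one."""
    lower, higher = sorted((rate_a, rate_b))
    return (lower / higher) >= 0.8

# The EEOC's worked example: 30% vs. 60% yields a ratio of 50%, below the 80% threshold.
print(four_fifths_check(0.30, 0.60))  # False -- the rates are "substantially" different
```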

The EEOC does note that the “four-fifths rule” is a general suggestion and may not be appropriate in every circumstance. Some courts have also found this rule to be inapplicable. Nonetheless, employers would be prudent to ask whether artificial intelligence vendors deployed the “four-fifths rule” in their algorithms. Statistics matter here.

Finally, the EEOC posits the issue of what employers should do when they discover that the use of an algorithmic decision-making tool would result in an adverse impact. The EEOC explains that one advantage of algorithmic decision-making tools is that the process of developing the tool may itself produce a variety of comparably effective alternative algorithms. Accordingly, employers’ failure to adopt a less discriminatory algorithm that may have been considered during the development process could give rise to liability. Employers should thus take care to document the steps they take to utilize non-discriminatory algorithms.

Implications For Employers

The use of artificial intelligence in employment decisions may be the new frontier for future EEOC investigations. While these technologies can have tremendous cost benefits, the risk is undeniable. Inevitably, some employer using AI will be the subject of a test case in the future.

Employers should monitor the results of their own use of artificial intelligence. This can be accomplished by conducting self-analyses on an ongoing basis to determine whether their employment practices are having a disproportionate negative impact on certain protected classes.

As the EEOC notes, employers can proactively change their practices going forward. Given the agility of artificial intelligence software, employers who find the technologies’ “employment decisions” to be problematic can and should work with vendors to remedy such defects.

We encourage our loyal blog readers to stay tuned as we continue to report on this exciting and rapidly evolving area of law.

