Duane Morris Takeaways: USA-based companies are embracing the use of artificial intelligence. At today’s Employment Practices Liability Insurance Conference in Chicago, Jerry Maatman of the Duane Morris Class Action Defense Group served as one of the co-hosts of the Conference, which addressed a broad range of topics on employment-related litigation and risk transfer strategies. Commissioner Keith Sonderling of the U.S. Equal Employment Opportunity Commission gave the keynote address at the Conference on the Legal Implications of Artificial Intelligence (“AI”) in the Workplace. Commissioner Sonderling shared his thoughts on what, how, and why corporations should be “looking around the corner” to ready themselves for new class action theories and possible EEOC litigation over the use of AI.
This blog post summarizes some of the salient points from the keynote address. For corporate counsel and HR professionals, Commissioner Sonderling’s insights are invaluable.
The Context
AI in the workplace is ubiquitous and expanding rapidly. It is not only about replacing workers with robots, but instead is about the broader notion of using AI tools to assist with employment-related decisions. Commissioner Sonderling, more than any other EEOC official, has labored extensively in this area by writing professional papers, giving speeches, and spearheading the Commission’s guidance.
The Three Buckets Of AI
Commissioner Sonderling suggested that it is helpful to place AI-related questions into three buckets: (i) generative AI; (ii) AI decision-making tools; and (iii) AI tools for employee monitoring and privacy.
Generative AI is starting to replace knowledge workers. That said, the decision to replace jobs may impact protected category groups disproportionately. Essentially, think of this as a high-tech RIF process. Plaintiffs’ class action lawyers or government enforcement litigators may assert that such decisions inevitably target older workers or less educated workers who are more diverse, especially if the “last in are the first out” in terms of the replacement process. The bottom line is that there is significant potential for disparate impact discrimination claims.
As to HR Departments using AI as decision-making tools, the challenge is the integrity of decision-making processes. Commissioner Sonderling asserted that the EEOC is focused on and concerned with AI bias and use of such tools to discriminate either intentionally or by disparate impact. This implicates both the design and type of use of the AI tools. He predicted that future lawsuits in this space would be more challenging and broader than ever before in terms of systemic lawsuits attacking an employer’s policies or practices.
Relative to AI tools for employee monitoring and privacy, Commissioner Sonderling suggested that states are getting into the mix by passing laws that regulate monitoring (e.g., Illinois through the Biometric Information Privacy Act). Federal legislation is likely years away. Commissioner Sonderling opined that federal legislation may evolve in the copyright space as an initial step.
The Takeaways For Employers
In his Q&A session at today’s Program, Commissioner Sonderling indicated that these evolving areas are likely to spark extensive litigation against employers in the future. He predicted that the plaintiffs’ class action bar will bring the lion’s share of the cases, as the EEOC has a limited budget and bandwidth to sue (e.g., in Fiscal Year 2023, which ended on September 30, 2023, the Commission brought fewer than 150 lawsuits). He also opined that the EEOC will be focused on the employment decision at issue, so an employer’s reliance on the testing of an AI tool undertaken by a software vendor will not insulate the employer from potential liability for an allegedly discriminatory employment-related decision.