Class Action Law Forum – Recent Developments Regarding The Impact Of Artificial Intelligence

By Jennifer A. Riley

I was honored to speak today at the 6th Annual Class Action Law Forum at the University of San Diego School of Law.

With hundreds of attendees, the conference focused on the current state of class action litigation and “white hot” litigation topics for 2024. The discussion points provide an excellent roadmap for practitioners and corporate counsel alike on the types of cases and legal issues that Corporate America is likely to encounter over the remainder of 2024.

The Impact of Artificial Intelligence

My address focused on the extraordinary impact of AI on the class action space over the past year.  Aside from improving the efficiency with which the plaintiffs’ class action bar can file and litigate claims, generative AI is providing an ocean of raw material for potential class claims.

Over the past year, AI quickly became a popular subject of class actions in multiple areas.  I touched on three in particular.

AI-Assisted Decision-Making

The first area involves claims against companies that use AI to enhance or streamline their decision-making processes.  Plaintiffs have filed suits, for instance, against insurers that use algorithms to adjudicate claims, as well as against agencies that use programs to evaluate governmental benefits.

This type of claim frequently arises in the employment context as companies use algorithms to streamline and enhance their candidate screening and selection procedures and to inform their promotion, transfer, and evaluation decisions.

In May 2023, the EEOC issued a technical assistance document entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII,” which purports to provide employers guidance on preventing discrimination when using AI.  The EEOC offered various examples of how employers are using AI, including resume scanners that prioritize resumes using certain keywords, employee monitoring software that rates employees based on keystrokes, virtual assistants or chatbots that ask job applicants about their qualifications, and video interviewing software that evaluates candidates based on their speech patterns and facial expressions.

Unsurprisingly, we have started to see lawsuits attacking the use of these types of tools.  In Mobley v. Workday, filed in the Northern District of California, for instance, the plaintiff, an African American male over the age of 40 who claimed that he suffered from anxiety and depression, brought suit against Workday, claiming that its applicant screening tools discriminated against applicants on the basis of race, age, and disability.  The plaintiff claimed that he applied for 80 to 100 jobs and, despite holding a bachelor’s degree in finance, among other qualifications, did not receive a single job offer.

Workday, of course, is a software vendor.  The district court granted the defendant’s motion to dismiss on the ground that plaintiff failed to plead sufficient facts regarding Workday’s supposed liability as an employer or “employment agency.”  In other words, the plaintiff failed to allege that Workday was “procuring” employees for its customers and merely claimed that he applied for jobs with a number of companies that all happened to use Workday.  On February 20, 2024, the plaintiff filed an amended complaint alleging that Workday was an agent of the employers that delegated authority to Workday to make hiring process decisions or, alternatively, that Workday was an indirect employer.

This is a prime example of a case to watch as we head through 2024, as plaintiffs seek to hold a software vendor liable for others’ use of its product.

Privacy Class Actions Targeting AI

The second area I touched on relates to privacy class actions.  Companies that develop AI products have faced a slew of class action lawsuits alleging privacy violations.  The core allegation has been that, by collecting publicly available data to develop and train their software, developers of AI products stole private and personal information from millions of individuals.

In cases like PM v. OpenAI, for example, groups of plaintiffs filed class action lawsuits against OpenAI and Microsoft alleging that, by collecting information from the internet to develop and train AI tools like ChatGPT, the companies stole private information from millions of people.  Other lawsuits have been filed against companies like OpenAI and Google alleging similar claims, including a recent example, AS v. OpenAI, filed in the Northern District of California on February 27, 2024.

Copyright Class Actions Targeting AI

Third, in addition to privacy class actions, technology companies have been hit with a recent surge of lawsuits over the alleged “scraping” of copyrighted materials and personal data from across the internet to train their generative AI systems.

On February 28, 2024, for instance, The Intercept Media filed suit in the Southern District of New York against OpenAI and Microsoft.  It alleged that, at least some of the time, ChatGPT provides responses to its users that regurgitate verbatim – or nearly verbatim – copyright-protected works of journalism without providing (and even allegedly intentionally excluding) the author, title, copyright, or terms of use information contained in those works.

Other examples abound.  At the end of last year, the New York Times filed a similar lawsuit alleging copyright infringement in both the input and output of OpenAI’s models.  In September 2023, the Authors Guild filed a class action suit against Microsoft and OpenAI on behalf of tens of thousands of authors, alleging that the two companies willfully violated copyright laws by reproducing and appropriating the authors’ copyrighted works to train their AI models.  In Andersen v. Stability AI, a group of artists claimed that Stability AI created a software program that downloaded billions of copyrighted images to train, and to act as a software library for, a variety of visual generative AI platforms.  They claimed that, having been trained on their works, the software could generate output in their artistic styles.

Many of these class actions are just getting off the ground.  Results at the motion to dismiss stage have been mixed, suggesting that a model for successfully pleading and prosecuting these types of class actions remains a work in progress.

As courts begin to weave their patchwork quilt of rulings, I expect that we have seen only the tip of the iceberg in terms of the types and numbers of filings on the generative AI class action front.

Other Hot Topics

The conference speakers covered myriad other timely and hot issues in the class action space, including the state of the current law on concepts such as ascertainability, standing, class-wide injury, and manageability at the class certification stage.  A recurrent issue was standing and class-wide injury: even if a court can “generally” determine class-wide injury at the certification and trial phases, how can it manageably resolve individualized questions at the damages phase?

The panelists likewise covered practical aspects of class-wide trials and mass arbitration, including best practices in preparing for and presenting cases at trial, such as the use of video evidence like videotaped depositions, the use of demonstrative evidence at trial, and the use of pre-trial focus groups to test and develop key themes and to tell a story that resonates with the jury.

In sum, 2024 is shaping up to be a transformative year on the class action litigation front.
