Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Three Best Practices to Mitigate High-Stakes AI Litigation Risk.” The article is available here and is a must-read for corporate counsel.
Organizations using AI-based technologies that perform facial recognition or other facial analysis, website advertising, profiling, automated decision making, educational operations, clinical medicine, generative AI, and more increasingly face the risk of being targeted by class action lawsuits and government enforcement actions alleging that they improperly obtained, disclosed, and misused personal data of website visitors, employees, customers, students, patients, and others, or that they infringed copyrights, fixed prices, and more. These disputes often seek millions or billions of dollars in damages from businesses of all sizes. This article identifies recent trends in such varied but similar AI litigation, draws common threads, and discusses three best practices that corporate counsel should consider to mitigate AI litigation risk: (1) add or update arbitration clauses to mitigate the risks of mass arbitration; (2) collaborate with information technology, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and (3) update notices to third parties and vendor agreements.
Implications For Corporations
Companies using AI technologies face multimillion- or billion-dollar risks of litigation seeking statutory and common-law damages under a wide variety of laws, including privacy statutes, wiretap statutes, unfair and deceptive practices statutes, antidiscrimination statutes, copyright statutes, antitrust statutes, common-law invasion of privacy, breach of contract, negligence, and more. This article analyzes litigation brought under these laws and offers corporate counsel three best practices to mitigate the risk of similar cases.
Duane Morris Takeaway: Available now is the recent article in the American Bar Association’s magazine “The Brief” by Partner Alex Karasik entitled “An Examination of the EEOC’s Artificial Intelligence Evolution.”[1] The article is available here and is a must-read for all employers and corporate counsel!
In the aftermath of the global pandemic, employee hiring has become a major challenge for businesses across the country, regardless of industry or region. Businesses want to meet this challenge in the most time- and cost-effective way possible. Employers remain in vigorous pursuit of anything that can give them an edge in recruiting, hiring, onboarding, and retaining the best talent. In 2023, artificial intelligence (AI) emerged as the focal point of that pursuit. The use of AI offers an unprecedented opportunity to facilitate employment decisions. Whether it is sifting through thousands of resumes in a matter of seconds, aggregating information about interviewees’ facial expressions, or generating data to guide compensation adjustments, AI has already had a profound impact on how businesses manage their human capital.
Title VII of the Civil Rights Act of 1964, which is the cornerstone federal employment discrimination law, does not contain statutory language specifically about the use of AI technologies, which did not emerge until several decades later. However, the U.S. Equal Employment Opportunity Commission (EEOC), the federal government agency responsible for enforcing Title VII, has made it a strategic priority to prevent and redress employment discrimination stemming from employers’ use of AI to make employment decisions regarding prospective and current employees.
Focusing on the EEOC’s pioneering efforts in this space, this article explores the risks of using AI in the employment context. First, the article examines the current litigation landscape with an in-depth case study analysis of the EEOC’s first AI discrimination lawsuit and settlement. Next, to figure out how we got here, the article travels back in time through the origins of the EEOC’s AI initiative to present-day outreach efforts. Finally, the article reads the EEOC’s tea leaves about the future of AI in the workplace, offering employers insight into how to best navigate the employment decision-making process when implementing this generation-changing technology.
Implications For Employers: Similar to the introduction of technologies such as the typewriter, computer, internet, and cell phone, there are, understandably, questions and resulting debates about the precise impact that AI will have on the business world, including the legal profession. To best adopt any new technology, one must first invest in understanding how it works. The EEOC has done exactly that over the last several years. Businesses that use AI software to make employment decisions must similarly commit to fully understanding its impact, particularly with regard to applicants and employees who are members of protected classes. The employment evolution is here, and those who are best equipped to understand the risks and rewards will thrive in this exciting new era.
By Gerald L. Maatman, Jr., Alex W. Karasik, and George J. Schaller
Duane Morris Takeaway: Artificial intelligence took the employment world by storm in 2023, quickly becoming one of the most talked about and debated subjects among corporate counsel across the country. Companies will continue to use AI as a resource to enhance decision-making processes for the foreseeable future as these technologies evolve and take shape in a myriad of employment functions. As these processes are fine-tuned, those who seek to harness the power of AI must be aware of the risks associated with its use. This featured article analyzes two novel AI lawsuits and highlights recent governmental guidance related to AI use. As the impact of AI is still developing, companies should recognize the types of claims apt to be brought for use of AI screening tools in the employment context and the implications of possible discriminatory conduct stemming from these tools.
In the Spring 2024 issue of the Journal of Emerging Issues in Litigation, Duane Morris partners Jerry Maatman and Alex Karasik and associate George Schaller analyze key developments in litigation and enforcement shaping the impact of artificial intelligence in the workplace and its subsequent legal risks. Read the full featured article here.
Duane Morris Takeaway: This week’s episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and associate Alessandra Mungioli with their discussion of 2023 developments and trends in consumer fraud class action litigation as detailed in the recently published Duane Morris Consumer Fraud Class Action Review – 2024.
Jerry Maatman: Welcome loyal blog listeners. Thank you for being on our weekly podcast, the Class Action Weekly Wire. My name is Jerry Maatman, I’m a partner at Duane Morris, and joining me today is my colleague, Alessandra. Thank you for being on our podcast to talk about thought leadership with respect to class actions.
Alessandra Mungioli: Thank you, Jerry. I’m glad to be here.
Jerry: Today we’re going to discuss our recent publication, our e-book on the Duane Morris Consumer Fraud Class Action Review. Listeners can find this book on our blog. Could you tell us a little bit about what readers can expect from this e-book?
Alessandra: Absolutely Jerry. Class action litigation in the consumer fraud space remains a key focus of the plaintiff’s bar. A wide variety of conduct gives rise to consumer fraud claims which typically involve a class of consumers who believe they were participating in a legitimate business transaction, but due to a merchant or a company’s alleged deceptive or fraudulent practices, the consumers were actually being defrauded.
Every state has consumer protection laws, and consumer fraud class actions require courts to analyze these statutes, both with respect to plaintiffs’ claims and also with respect to choice of law analyses when a complaint seeks to impose liability that is predicated on multiple states’ consumer protection laws.
To assist corporate counsel and business leaders with navigating consumer fraud class action litigation, the class action team here at Duane Morris has put together the Consumer Fraud Class Action Review, which analyzes significant rulings, major settlements, and identifies key trends that are apt to impact companies in 2024.
Jerry: This is a great, essential desk reference for practitioners and corporate counsel alike dealing with class actions in this space. Difficult to do in a short podcast, but what are some of the key takeaways in that desk reference?
Alessandra: Just as the type of actionable conduct varies, so, too, do the industries within which consumer fraud claims abound. In the last several years, for example, the beauty and cosmetics industry saw a boom in consumer fraud class actions as consumers demanded increased transparency regarding the ingredients in their cosmetic products and the products’ effects. In 2023, consumer fraud class actions ran the gamut of false advertising and false labeling claims as well.
Artificial intelligence also made its way into the class action arena in the consumer fraud space for the first time in 2023. In MillerKing, LLC, et al. v. DoNotPay Inc., the plaintiff, a Chicago law firm, filed a class action alleging that the defendant, an online subscription service that uses “robot lawyers” programmed with AI, was not licensed to practice law, and asserting claims for consumer fraud, deceptive practices, and trademark infringement. The defendant moved to dismiss the action on the basis that the plaintiff failed to establish an injury-in-fact sufficient to confer standing. The plaintiff asserted that the conduct caused “irreparable harm to many citizens, as well as to the judicial system itself,” and constituted “an infringement upon the rights of those who are properly licensed,” such as “attorneys and law firms.” The court found that the plaintiff failed to demonstrate any real injury as to its claims, however, and granted the defendant’s motion to dismiss.
Jerry: Well, robot lawyers and lawyer bots – that’s quite a development in 2023. How did the plaintiffs’ bar do in – what I consider the Holy Grail in this space – securing class certification, and then conversion of a certified class into a monetary class-wide settlement?
Alessandra: So settlements were very lucrative in 2023. The top 10 consumer fraud class action settlements in 2023 totaled $3.29 billion. And by comparison, the top 10 settlements in 2022 had totaled $8.5 billion, so we have seen a downward trend. Notably, five of these 10 settlements last year took place in California courts. The top settlements in 2023 resolved litigation stemming from a variety of different theories, from smartphone performance issues to the marketing of vape products. Last year, courts granted plaintiffs’ motions for class certification in consumer fraud lawsuits approximately 66% of the time. And the overall certification rate for class actions in 2023 was 72%.
Jerry: Well, that’s quite a litigation scorecard. And this is an area of interest that the class action team at Duane Morris will be following closely and blogging about in 2024. Well, thank you for being with us today and thank you loyal blog readers and listeners for joining our weekly podcast again. You can download the Duane Morris Consumer Fraud Class Action Review off our website. Have a great day!
Duane Morris Takeaways: Privacy and data breach class action litigation are among the key issues that keep businesses and corporate counsel up at night. Over $1 billion was procured in settlements and jury verdicts over the last year for these types of “bet-the-company” cases. At the ALM Law.com Legalweek 2024 conference in New York City, Partner Alex W. Karasik of the Duane Morris Class Action Defense Group was a panelist at the highly anticipated session, “Trends in US Data Privacy Laws and Enforcement.” The conference, which had over 6,000 attendees, produced excellent dialogues on how cutting-edge technologies can potentially lead to class action litigation. While AI took the main stage, along with an epic keynote speech from revered actor Bryan Cranston, privacy and data-management issues were firmly on the radar of attendees.
Legalweek’s robust agenda covered a wide range of global legal issues, with a prominent focus on the impact of technology and innovation. Some of the topics included artificial intelligence, data privacy, biometrics, automation, and cybersecurity. For businesses that deploy these technologies, or are thinking about doing so, this conference was informative in terms of both their utility and risk. The sessions provided valuable insight from a broad range of constituents, including in-house legal counsel, outside legal counsel, technology vendors, and other key players in the tech and legal industries.
I had the privilege of speaking about how data privacy laws and biometric technology have impacted the class action litigation space. Joining me on the panel were Christopher Wall (Special Counsel for Global Privacy and Forensics, and Data Protection Officer, HaystackID); Sonia Zeledon (Associate General Counsel Compliance, Risk, Ethics, and Privacy, The Hershey Company); and Pallab Chakraborty (Director of Compliance & Privacy, Xilinx). My esteemed fellow panelists and I discussed how the emerging patchwork of data privacy laws – both in the U.S. and globally – creates compliance challenges for businesses. I provided insight on how high-stakes biometric privacy class action litigation in Illinois can serve as a roadmap for companies, as similar state statutes are emerging across the country. In addition, I explored how artificial intelligence tools used in the employee recruitment and hiring processes can further create potential legal risks. Finally, I shared my prediction of how the intersection of ESG and privacy litigation will continue to emerge as a hot area for class action litigation into 2024 and beyond.
Finally, and probably the most important update for many of you, Bryan Cranston’s keynote address was awesome! Covering the full emotional spectrum, Bryan was fascinating, inspirational, and hilarious. Some of the topics he discussed included the importance of family, the future impact of AI on the film industry, his mezcal brand, and a passionate kiss during his first acting scene at 19. Bryan was a tough act to follow!
Thank you to ALM Law.com, the Legalweek team, my fellow panelists, the inquisitive attendees, the media personnel, and all others who helped make this week special!
By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller
Duane Morris Takeaways: In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. Jan 19, 2024) (ECF No. 45), Judge Rita F. Lin of the U.S. District Court for the Northern District of California dismissed a lawsuit against Workday involving allegations that algorithm-based applicant screening tools discriminated against applicants on the basis of race, age, and disability. With businesses more frequently relying on artificial intelligence to perform recruiting and hiring functions, this ruling is helpful for companies facing algorithm-based discrimination lawsuits in terms of potential strategies to attack such claims at the pleading stage.
Case Background
Plaintiff, an African-American male over the age of forty with anxiety and depression, alleged that he applied to 80 to 100 jobs with companies that use Workday’s screening tools. Despite holding a bachelor’s degree in finance and an associate’s degree in network systems administration, Plaintiff claimed he did not receive a single job offer. Id. at 1-2.
On July 19, 2021, Plaintiff filed an amended charge of discrimination with the Equal Employment Opportunity Commission (“EEOC”). On November 22, 2022, the EEOC issued a dismissal and notice of right to sue. On February 21, 2023, Plaintiff filed a lawsuit against Workday, alleging that Workday’s tools discriminated against job applicants who are African-American, over the age of 40, and/or disabled in violation of Title VII, the ADEA, and the ADA, respectively.
Workday moved to dismiss the complaint, arguing that Plaintiff failed to exhaust administrative remedies with the EEOC as to his intentional discrimination claims, and that Plaintiff did not allege facts to state a plausible claim that Workday was liable as an “employment agency” under the anti-discrimination statutes at issue.
The Court’s Decision
The Court granted Workday’s motion to dismiss in part. First, the Court noted that the parties did not dispute that Plaintiff’s EEOC charge sufficiently exhausted the disparate impact claims. However, Workday moved to dismiss Plaintiff’s claims for intentional discrimination under Title VII and the ADEA on the basis of his failure to exhaust administrative remedies, arguing that the EEOC charge alleged only claims for disparate impact, not intentional discrimination.
Rejecting Workday’s argument, the Court held that it must construe the language of EEOC charges with “utmost liberality since they are made by those unschooled in the technicalities of formal pleading.” Id. at 5 (internal quotation marks and citations omitted). The Court acknowledged that the thrust of Plaintiff’s factual allegations in the EEOC charge concerned how Workday’s screening tools discriminated against Plaintiff based on his race and age. However, the Court held that those allegations were reasonably related to his intentional discrimination claims, and that an EEOC investigation into whether the tools had a disparate impact would be intertwined with an inquiry into whether they were intentionally biased. Accordingly, the Court denied Workday’s motion to dismiss on the basis of failure to exhaust administrative remedies.
Next, the Court addressed Workday’s argument that Plaintiff did not allege facts to state a plausible claim that it was liable as an “employment agency” under the anti-discrimination statutes at issue. The Court opined that Plaintiff did not allege facts sufficient to state a claim that Workday was “procuring” employees for these companies, as required for Workday to qualify as an “employment agency.” Id. at 1. For example, Plaintiff did not allege details about his application process other than that he applied to jobs with companies using Workday, and did not land any job offers. The complaint also did not allege that Workday helped recruit and select applicants.
In an attempt to cure these defects at the motion hearing and in his opposition brief, Plaintiff identified two other potential legal bases for Workday’s liability: as an “indirect employer” and as an “agent.” Id. To give Plaintiff an opportunity to attempt to correct these deficiencies, the Court granted Workday’s motion to dismiss on this basis, but with leave for Plaintiff to amend. Accordingly, the Court granted in part and denied in part Workday’s motion to dismiss.
Implications For Businesses
Artificial intelligence and algorithm-based applicant screening tools are game-changers for companies in terms of streamlining their recruiting and hiring processes. As this lawsuit highlights, these technologies also invite risk in the employment discrimination context.
For technology vendors, this ruling illustrates that novel arguments about the formation of the “employment” relationship could potentially be fruitful at the pleading stage. However, the Court’s decision to let Plaintiff amend the complaint and have one more bite at the apple means Workday is not off the hook just yet. Employers and vendors of recruiting software would be wise to pay attention to this case, and to the anticipated wave of employment discrimination lawsuits that are apt to be filed, as algorithm-based applicant screening tools become more commonplace.