The FTC Issues Three New Orders Showing Its Increased 2024 Enforcement Activities Regarding AI And Adtech

By Gerald L. Maatman, Jr. and Justin R. Donoho

Duane Morris Takeaways: On December 3, 2024, the Federal Trade Commission (FTC) issued an order in In Re Intellivision Technologies Corp. (FTC Dec. 3, 2024) prohibiting an AI software developer from misrepresenting that its AI-powered facial recognition software was free from gender and racial bias, and two orders in In Re Mobilewalla, Inc. (FTC Dec. 3, 2024), and In Re Gravy Analytics, Inc. (FTC Dec. 3, 2024), requiring data brokers to improve their advertising technology (adtech) privacy and security practices.  These three orders are significant because they show that in 2024 the FTC markedly increased its enforcement activities in the areas of AI and adtech.

Background

In 2024, the FTC brought and litigated at least 10 enforcement actions involving alleged deception about AI, alleged AI-powered fraud, and allegedly biased AI.  See the FTC’s AI case webpage located here.  This represents a fivefold increase over the at least two AI-related actions the FTC brought in 2023.  See id.  Just as private class actions involving AI are on the rise, so are the FTC’s AI-related enforcement actions.

This year the FTC also brought and litigated at least 21 enforcement actions categorized by the FTC as involving privacy and security.  See the FTC’s privacy and security webpage located here.  This is roughly double the FTC’s privacy and data security case activity in 2023.  See id.  Most of these new cases involve alleged unfair use of adtech, an area of recently increased litigation activity in private class actions as well.

In short, this year the FTC officially achieved the “paradigm shift” of focusing its enforcement activities on modern technologies and data privacy, as forecasted in 2022 by Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, here.

All these complaints were brought by the FTC under the FTC Act, under which there is no private right of action.

The FTC’s December 3, 2024 Orders

In Intellivision, the FTC brought an enforcement action against a developer of AI-based facial recognition software embedded in home security products to enable consumers to gain access to their home security systems.  According to the complaint, the developer publicly described its facial recognition software as entirely free of gender or racial bias, as shown by rigorous testing, when, in fact, testing by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) showed that the software was not among the top 100 best-performing algorithms tested by NIST in terms of error rates across different demographics, including region of birth and sex.  (Compl. ¶ 11.)  Moreover, according to the FTC, the developer did not possess any testing of its own to support its claims of lack of bias.  Based on these allegations, the FTC brought misrepresentation claims under the FTC Act.  The parties agreed to a consent order, in which the developer agreed to refrain from making any representations about the accuracy, efficacy, or lack of bias of its facial recognition technology unless it could first substantiate such claims with reliable testing and documentation as set forth in the consent order.  The consent order also requires the developer to communicate the order to its managers and affiliated companies for the next 20 years, to make timely compliance reports and notices, and to create and maintain various detailed records, including regarding the company’s accounting, personnel, consumer complaints, compliance, marketing, and testing.

In Mobilewalla and Gravy Analytics, the FTC brought enforcement actions against data brokers that allegedly obtained consumer location data from other data suppliers and mobile applications and sold access to this data for purposes of online advertising without consumers’ consent.  According to the FTC’s complaints, the data brokers engaged in unfair collection, sale, use, and retention of sensitive location information, all in alleged violation of the FTC Act.  The parties agreed to consent orders, in which the data brokers agreed to refrain from collecting, selling, using, and retaining sensitive location information; to establish a Sensitive Location Data Program, a Supplier Assessment Program, and a comprehensive privacy program, as detailed in the orders; to provide consumers clear and conspicuous notice; to provide consumers a means to request data deletion; to delete location data as set forth in the orders; and to perform compliance, recordkeeping, and other activities, as set forth in the orders.

Implications For Companies

The FTC’s increased enforcement activities in the areas of adtech and AI serve as a cautionary tale for companies that use these technologies.

As the FTC’s recent orders and its 2024 dockets show, the FTC is increasingly using the FTC Act as a sword against alleged unfair use of adtech and AI.  Moreover, although the December 3 orders do not expressly impose any monetary penalties, the injunctive relief they impose may be costly, and other FTC consent orders have included harsher remedies, including monetary penalties of millions of dollars and algorithmic disgorgement.  As adtech and AI continue to proliferate, and in light of the increased activities of the FTC, the plaintiffs’ class action bar, and the EEOC in these areas, as we blogged about here, here, here, here, and here, organizations should consider whether to modify their website terms of use, data privacy policies, and all other notices to their website visitors and customers to describe their use of AI and adtech in additional detail.  Doing so could deter, or help defend against, a future enforcement action or class action similar to the many being filed today that allege the omission of such details and seek a wide range of injunctive and monetary relief.

Announcing A New Journal Article By Justin Donoho Of Duane Morris Explaining Best Practices To Mitigate High-Stakes AI Litigation Risk

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Three Best Practices to Mitigate High-Stakes AI Litigation Risk.”  The article is available here and is a must-read for corporate counsel.

Organizations using AI-based technologies that perform facial recognition or other facial analysis, website advertising, profiling, automated decision-making, educational operations, clinical medicine, generative AI, and more increasingly face the risk of being targeted by class action lawsuits and government enforcement actions alleging that they improperly obtained, disclosed, and misused the personal data of website visitors, employees, customers, students, patients, and others, or that they infringed copyrights, fixed prices, and more. These disputes often seek millions or billions of dollars in damages from businesses of all sizes. This article identifies recent trends in such varied but similar AI litigation, draws common threads, and discusses three best practices that corporate counsel should consider to mitigate AI litigation risk: (1) add or update arbitration clauses to mitigate the risks of mass arbitration; (2) collaborate with information technology, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and (3) update notices to third parties and vendor agreements.

Implications For Corporations

Companies using AI technologies face multimillion- or billion-dollar risks of litigation seeking statutory and common-law damages under a wide variety of laws, including privacy statutes, wiretap statutes, unfair and deceptive practices statutes, antidiscrimination statutes, copyright statutes, antitrust statutes, common-law invasion of privacy, breach of contract, negligence, and more.  This article analyzes litigation brought under these laws and offers corporate counsel three best practices to mitigate the risk of similar cases.

Announcing A New ABA Article By Duane Morris Partner Alex Karasik Explaining The EEOC’s Artificial Intelligence Evolution

By Alex W. Karasik

Duane Morris Takeaway: Available now is the recent article in the American Bar Association’s magazine “The Brief” by Partner Alex Karasik entitled “An Examination of the EEOC’s Artificial Intelligence Evolution.”[1] The article is available here and is a must-read for all employers and corporate counsel!

In the aftermath of the global pandemic, employee hiring has become a major challenge for businesses across the country, regardless of industry or region. Businesses want to accomplish this goal in the most time- and cost-effective way possible. Employers remain in vigorous pursuit of anything that can give them an edge in recruiting, hiring, onboarding, and retaining the best talent. In 2023, artificial intelligence (AI) emerged as the focal point of that pursuit. The use of AI offers an unprecedented opportunity to facilitate employment decisions. Whether it is sifting through thousands of resumes in a matter of seconds, aggregating information about interviewees’ facial expressions, or generating data to guide compensation adjustments, AI has already had a profound impact on how businesses manage their human capital.

Title VII of the Civil Rights Act of 1964, which is the cornerstone federal employment discrimination law, does not contain statutory language specifically about the use of AI technologies, which did not emerge until several decades later. However, the U.S. Equal Employment Opportunity Commission (EEOC), the federal government agency responsible for enforcing Title VII, has made it a strategic priority to prevent and redress employment discrimination stemming from employers’ use of AI to make employment decisions regarding prospective and current employees.

Focusing on the EEOC’s pioneering efforts in this space, this article explores the risks of using AI in the employment context. First, the article examines the current litigation landscape with an in-depth case study analysis of the EEOC’s first AI discrimination lawsuit and settlement. Next, to figure out how we got here, the article travels back in time through the origins of the EEOC’s AI initiative to present-day outreach efforts. Finally, the article reads the EEOC’s tea leaves about the future of AI in the workplace, offering employers insight into how to best navigate the employment decision-making process when implementing this generation-changing technology.

Implications For Employers: Similar to the introduction of technologies such as the typewriter, computer, internet, and cell phone, there are, understandably, questions and resulting debates about the precise impact that AI will have on the business world, including the legal profession. To best adopt any new technology, one must first invest in understanding how it works. The EEOC has done exactly that over the last several years. The businesses that use AI software to make employment decisions must similarly make a commitment to fully understand its impact, particularly with regard to applicants and employees who are members of protected classes. The employment evolution is here, and those who are best equipped to understand the risks and rewards will thrive in this exciting new era.

[1] “An Examination of the EEOC’s Artificial Intelligence Evolution,” The Brief, Volume 53, Number 2 (Winter 2024).  © 2024 by the American Bar Association.  Reproduced with permission.  All rights reserved.  This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Artificial Intelligence Litigation Risks in the Employment Discrimination Context

By Gerald L. Maatman, Jr., Alex W. Karasik, and George J. Schaller

Duane Morris Takeaway: Artificial intelligence took the employment world by storm in 2023, quickly becoming one of the most talked about and debated subjects among corporate counsel across the country. Companies will continue to use AI as a resource to enhance decision-making processes for the foreseeable future as these technologies evolve and take shape in a myriad of employment functions. As these processes are fine-tuned, those who seek to harness the power of AI must be aware of the risks associated with its use. This featured article analyzes two novel AI lawsuits and highlights recent governmental guidance related to AI use. As the impact of AI is still developing, companies should recognize the types of claims apt to be brought for use of AI screening tools in the employment context and the implications of possible discriminatory conduct stemming from these tools.

In the Spring 2024 issue of the Journal of Emerging Issues in Litigation, Duane Morris partners Jerry Maatman and Alex Karasik and associate George Schaller analyze key developments in litigation and enforcement shaping the impact of artificial intelligence in the workplace and its subsequent legal risks. Read the full featured article here.

The Class Action Weekly Wire – Episode 49: 2024 Preview: Consumer Fraud Class Action Litigation

Duane Morris Takeaway: This week’s episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and associate Alessandra Mungioli with their discussion of 2023 developments and trends in consumer fraud class action litigation as detailed in the recently published Duane Morris Consumer Fraud Class Action Review – 2024.

Check out today’s episode and subscribe to our show from your preferred podcast platform: Spotify, Amazon Music, Apple Podcasts, Google Podcasts, the Samsung Podcasts app, Podcast Index, Tune In, Listen Notes, iHeartRadio, Deezer, YouTube or our RSS feed.

Episode Transcript

Jerry Maatman: Welcome loyal blog listeners. Thank you for being on our weekly podcast, the Class Action Weekly Wire. My name is Jerry Maatman, I’m a partner at Duane Morris, and joining me today is my colleague, Alessandra. Thank you for being on our podcast to talk about thought leadership with respect to class actions.

Alessandra Mungioli: Thank you, Jerry. I’m glad to be here.

Jerry: Today we’re going to discuss our recent publication, our e-book on the Duane Morris Consumer Fraud Class Action Review. Listeners can find this book on our blog. Could you tell us a little bit about what readers can expect from this e-book?

Alessandra: Absolutely, Jerry. Class action litigation in the consumer fraud space remains a key focus of the plaintiffs’ bar. A wide variety of conduct gives rise to consumer fraud claims, which typically involve a class of consumers who believe they were participating in a legitimate business transaction but, due to a merchant’s or company’s alleged deceptive or fraudulent practices, were actually being defrauded.

Every state has consumer protection laws, and consumer fraud class actions require courts to analyze these statutes, both with respect to plaintiffs’ claims and also with respect to choice of law analyses when a complaint seeks to impose liability that is predicated on multiple states’ consumer protection laws.

To assist corporate counsel and business leaders with navigating consumer fraud class action litigation, the class action team here at Duane Morris has put together the Consumer Fraud Class Action Review, which analyzes significant rulings, major settlements, and identifies key trends that are apt to impact companies in 2024.

Jerry: This is a great, essential desk reference for practitioners and corporate counsel alike dealing with class actions in this space. Difficult to do in a short podcast, but what are some of the key takeaways in that desk reference?

Alessandra: Just as the type of actionable conduct varies, so, too, do the industries within which consumer fraud claims abound. In the last several years, for example, the beauty and cosmetics industry saw a boom in consumer fraud class actions as consumers demanded increased transparency regarding the ingredients in their cosmetic products and the products’ effects. In 2023, consumer fraud class actions ran the gamut of false advertising and false labeling claims as well.

Artificial intelligence also made its way into the class action arena in the consumer fraud space for the first time in 2023. In MillerKing, LLC, et al. v. DoNotPay Inc., the plaintiff, a Chicago law firm, filed a class action alleging that the defendant, an online subscription service that uses “robot lawyers” programmed with AI, was not licensed to practice law, and brought claims for consumer fraud, deceptive practices, and trademark infringement. The defendant moved to dismiss the action on the basis that the plaintiff failed to establish an injury-in-fact sufficient to confer standing. The plaintiff asserted that the conduct caused “irreparable harm to many citizens, as well as to the judicial system itself,” and constituted “an infringement upon the rights of those who are properly licensed,” such as “attorneys and law firms.” The court found that the plaintiff failed to demonstrate any real injury as to its claims and granted the defendant’s motion to dismiss.

Jerry: Well, robot lawyers and lawyer bots – that’s quite a development in 2023. How did the plaintiffs’ bar do in – what I consider the Holy Grail in this space – securing class certification, and then conversion of a certified class into a monetary class-wide settlement?

Alessandra: So settlements were very lucrative in 2023. The top 10 consumer fraud class action settlements in 2023 totaled $3.29 billion. And by comparison, the top 10 settlements in 2022 had totaled $8.5 billion, so we have seen a downward trend. Notably, five of these 10 settlements last year took place in California courts. The top settlements in 2023 resolved litigation stemming from a variety of different theories, from smartphone performance issues to the marketing of vape products. Last year, courts granted plaintiffs’ motions for class certification in consumer fraud lawsuits approximately 66% of the time. And the overall certification rate for class actions in 2023 was 72%.

Jerry: Well, that’s quite a litigation scorecard. And this is an area of interest that the class action team at Duane Morris will be following closely and blogging about in 2024. Well, thank you for being with us today and thank you loyal blog readers and listeners for joining our weekly podcast again. You can download the Duane Morris Consumer Fraud Class Action Review off our website. Have a great day!

Alessandra: Thank you!

Report From New York City: U.S. Privacy Laws, A.I. Developments, And Bryan Cranston Take Center Stage At Legalweek 2024

By Alex W. Karasik

Duane Morris Takeaways: Privacy and data breach class action litigation are among the key issues that keep businesses and corporate counsel up at night.  Over $1 billion was procured in settlements and jury verdicts over the last year for these types of “bet-the-company” cases.  At the ALM Law.com Legalweek 2024 conference in New York City, Partner Alex W. Karasik of the Duane Morris Class Action Defense Group was a panelist at the highly anticipated session, “Trends in US Data Privacy Laws and Enforcement.”  The conference, which had over 6,000 attendees, produced excellent dialogues on how cutting-edge technologies can potentially lead to class action litigation.  While A.I. took the main stage, along with an epic keynote speech from revered actor Bryan Cranston, privacy and data-management issues were firmly on the radar of attendees.

Legalweek’s robust agenda covered a wide range of global legal issues, with a prominent focus on the impact of technology and innovation.  Some of the topics included artificial intelligence, data privacy, biometrics, automation, and cybersecurity.  For businesses that deploy these technologies, or are thinking about doing so, this conference was informative in terms of both their utility and their risk.  The sessions provided valuable insight from a broad range of constituents, including in-house legal counsel, outside legal counsel, technology vendors, and other key players in the tech and legal industries.

I had the privilege of speaking about how data privacy laws and biometric technology have impacted the class action litigation space.  Joining me on the panel were Christopher Wall (Special Counsel for Global Privacy and Forensics, and Data Protection Officer, HaystackID); Sonia Zeledon (Associate General Counsel, Compliance, Risk, Ethics, and Privacy, The Hershey Company); and Pallab Chakraborty (Director of Compliance & Privacy, Xilinx).  My esteemed fellow panelists and I discussed how the emerging patchwork of data privacy laws – both in the U.S. and globally – creates compliance challenges for businesses.  I provided insight on how high-stakes biometric privacy class action litigation in Illinois can serve as a roadmap for companies, as similar state statutes emerge across the country.  In addition, I explored how artificial intelligence tools used in employee recruitment and hiring processes can create additional legal risks.  Finally, I shared my prediction that the intersection of ESG and privacy litigation will continue to emerge as a hot area for class action litigation into 2024 and beyond.

Finally, and probably the most important update to many of you, Bryan Cranston’s keynote address was awesome!  Covering the whole gamut of the emotional spectrum, Bryan was fascinating, inspirational, and hilarious.  Some of the topics he discussed included the importance of family, the future impact of A.I. on the film industry, his mezcal brand, and a passionate kiss during his first acting scene at age 19.  Bryan was a tough act to follow!

Thank you to ALM Law.com, the Legalweek team, my fellow panelists, the inquisitive attendees, the media personnel, and all others who helped make this week special.

California Court Dismisses Artificial Intelligence Employment Discrimination Lawsuit

By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller

Duane Morris Takeaways:  In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. Jan. 19, 2024) (ECF No. 45), Judge Rita F. Lin of the U.S. District Court for the Northern District of California dismissed a lawsuit against Workday involving allegations that its algorithm-based applicant screening tools discriminated against applicants on the basis of race, age, and disability. With businesses more frequently relying on artificial intelligence to perform recruiting and hiring functions, this ruling is helpful for companies facing algorithm-based discrimination lawsuits in terms of potential strategies to attack such claims at the pleading stage.

Case Background

Plaintiff, an African-American male over the age of forty with anxiety and depression, alleged that he applied to 80 to 100 jobs with companies that use Workday’s screening tools. Despite holding a bachelor’s degree in finance and an associate’s degree in network systems administration, Plaintiff claimed he did not receive a single job offer. Id. at 1-2.

On July 19, 2021, Plaintiff filed an amended charge of discrimination with the Equal Employment Opportunity Commission (“EEOC”). On November 22, 2022, the EEOC issued a dismissal and notice of right to sue. On February 21, 2023, Plaintiff filed a lawsuit against Workday, alleging that Workday’s tools discriminated against job applicants who are African-American, over the age of 40, and/or disabled in violation of Title VII, the ADEA, and the ADA, respectively.

Workday moved to dismiss the complaint, arguing that Plaintiff failed to exhaust administrative remedies with the EEOC as to his intentional discrimination claims, and that Plaintiff did not allege facts to state a plausible claim that Workday was liable as an “employment agency” under the anti-discrimination statutes at issue.

The Court’s Decision

The Court granted in part and denied in part Workday’s motion to dismiss. First, the Court noted the parties did not dispute that Plaintiff’s EEOC charge sufficiently exhausted the disparate impact claims. However, Workday moved to dismiss Plaintiff’s claims for intentional discrimination under Title VII and the ADEA on the basis of his failure to exhaust administrative remedies, arguing that the EEOC charge alleged only claims for disparate impact, not intentional discrimination.

Rejecting Workday’s argument, the Court held that it must construe the language of EEOC charges with “utmost liberality since they are made by those unschooled in the technicalities of formal pleading.”  Id. at 5 (internal quotation marks and citations omitted).  The Court acknowledged that the thrust of Plaintiff’s factual allegations in the EEOC charge concerned how Workday’s screening tools discriminated against Plaintiff based on his race and age.  However, the Court held that those claims were reasonably related to his intentional discrimination claims, and that an EEOC investigation into whether the tools had a disparate impact would be intertwined with an investigation into whether they were intentionally biased.  Accordingly, the Court denied Workday’s motion to dismiss on the basis of failure to exhaust administrative remedies.

Next, the Court addressed Workday’s argument that Mobley did not allege facts to state a plausible claim that it was liable as an “employment agency” under the anti-discrimination statutes at issue. The Court opined that Plaintiff did not allege facts sufficient to state a claim that Workday was “procuring” employees for these companies, as required for Workday to qualify as an “employment agency.” Id. at 1. For example, Plaintiff did not allege details about his application process other than that he applied to jobs with companies using Workday and did not land any job offers. The complaint also did not allege that Workday helped recruit and select applicants.

In an attempt to cure these defects at the motion hearing and in his opposition brief, Plaintiff identified two other potential legal bases for Workday’s liability: as an “indirect employer” and as an “agent.” Id. To give Plaintiff an opportunity to correct these deficiencies, the Court granted Workday’s motion to dismiss on this basis, but with leave for Plaintiff to amend. Accordingly, the Court granted in part and denied in part Workday’s motion to dismiss.

Implications For Businesses

Artificial intelligence and algorithm-based applicant screening tools are game-changers for companies in terms of streamlining their recruiting and hiring processes. As this lawsuit highlights, these technologies also invite risk in the employment discrimination context.

For technology vendors, this ruling illustrates that novel arguments about the formation of the “employment” relationship can be fruitful at the pleading stage. However, the Court’s decision to let Plaintiff amend the complaint and take one more bite at the apple means Workday is not off the hook just yet. Employers and vendors of recruiting software would be wise to pay attention to this case, and to the anticipated wave of employment discrimination lawsuits that are apt to be filed, as algorithm-based applicant screening tools become more commonplace.
