Data Security and Privacy Liability – Takeaways From The Sedona Conference Working Group 11 Annual Meeting in Redmond, WA

By Justin R. Donoho

Duane Morris Takeaways:  Data privacy and data breach class action litigation continue to explode.  At the Sedona Conference Working Group 11 on Data Security and Privacy Liability, held at Microsoft’s campus in Redmond, Washington, on May 7, 2025, Justin Donoho of the Duane Morris Class Action Defense Group served as a dialogue leader for two panel discussions, “Individual Liability for Data Security Failures” and “Privacy and Data Security Litigation Update.”  The working group meeting, which spanned two days and drew over 50 participants, produced excellent dialogues on these topics and others, including AI statutory guidance, shifting U.S. federal regulatory priorities in the privacy and data security landscape, a privacy and data security state regulator roundtable, emerging issues and trends in the cyber threat landscape, and law firm data security.

The Conference’s robust agenda featured over 30 dialogue leaders from a wide array of backgrounds, including government officials, data security industry experts, a district court judge, in-house attorneys, cyber and data privacy law professors, plaintiffs’ attorneys, and defense attorneys.  In a masterful way, the agenda provided valuable insights for participants toward this working group’s mission, which is to identify and comment on trends in data security and privacy law, in an effort to help organizations prepare for and respond to data breaches, and to assist attorneys and judicial officers in resolving questions of legal liability and damages.

Justin had the privilege of speaking about current trends in cases seeking individual liability for data security failures and in data privacy class actions.  Highlights of his presentations included the SEC’s enforcement action against SolarWinds’ CISO Timothy Brown, which has CISOs worldwide on the edge of their seats (discussed in Justin’s article here), and two recent cases resulting in helpful precedent for defendants facing claims alleging privacy violations arising from their uses of website advertising technologies (adtech): one that disposed of an adtech class action due to consent by browsewrap (see here), and one that dismissed an adtech class action due to ambiguities found in a wiretap statute (see here).

Finally, one of the greatest joys of participating in Sedona Conference meetings is the opportunity to draw on the wisdom of fellow presenters and other participants from around the globe.  Highlights included:

  1. A lively dialogue among panelists and other participants regarding trends in Article III standing decisions and the costs and benefits defendants should weigh when deciding whether to seek dismissal for plaintiffs’ lack of Article III standing.
  2. State regulators giving candid advice on what to do, and what not to do, following data breaches in terms of notifying their offices, participating in investigations, and attempting to negotiate settlements.
  3. Experts of all stripes dissecting the Colorado Privacy Act, Colorado AI Act, and those statutes’ application to AI hiring tools in an effort to offer guidance to future legislators drafting similar statutes.
  4. Seasoned defense attorneys discussing how federal agencies responsible for rules regarding privacy and data security have responded to the new presidential administration’s “Regulatory Freeze Pending Review” memorandum, the personnel changes, actions, and reviews taken during the first months of the new administration, and the implications for regulated organizations.
  5. Cyber and cyber insurance experts leading a dialogue about emerging risks, regulatory challenges, liability concerns, and underwriting processes relating to cybersecurity.
  6. Law firm consultants addressing current issues with AI that law firms should consider when crafting their cybersecurity assessments, policies, and procedures.

Thank you to the Sedona Conference Working Group 11 and its incredible team, the fellow dialogue leaders, the engaging participants, and all others who helped make this meeting in Redmond, Washington, an informative and unforgettable experience.

For more information on the Duane Morris Class Action Group, including its Data Privacy Class Action Review e-book, and Data Breach Class Action Review e-book, please click the links here and here.

Visualize This:  The Sixth Circuit Holds That The VPPA Applies Only To Consumers Of Audio-Visual Materials

By Gerald L. Maatman, Jr., Shannon Noelle, and Ryan T. Garippo

Duane Morris Takeaways:  On April 3, 2025, in Salazar, et al. v. Paramount Global, d/b/a 247Sports, Case No. 23-5748, 2025 WL 1000139 (6th Cir. Apr. 3, 2025), the Sixth Circuit departed from two other federal circuits (i.e., the Second and Seventh Circuits) in its interpretation of “consumers” covered by the Video Privacy Protection Act (“VPPA”), and affirmed the district court’s dismissal of a putative class action on the basis that only consumers of audio-visual related materials are covered by the protections of the Act.  The Sixth Circuit’s holding narrows the scope and reach of the statute and is a welcome reprieve for companies offering video content on their websites in connection with advertising technology (“adtech”).

Background

In September 2022, Michael Salazar brought a putative class action against Paramount Global (i.e., the owner of 247Sports.com), claiming that the media company violated the VPPA because it installed Meta Pixel on its website. Salazar alleged that Meta Pixel, a form of adtech, tracked his and putative class members’ video viewing history and disclosed it to Meta without his consent.  He sought to represent a putative class of subscribers to 247Sports.com’s newsletter which contained links to articles (that could contain videos), photographs, and other content.

Salazar, however, did not allege that he was a subscriber of audio-visual materials as contemplated by the statute.  18 U.S.C. § 2710(a)(1)-(4).  To the contrary, he alleged that he was a subscriber of 247Sports.com’s newsletter, and that 247Sports.com separately provided audio-visual materials to its customers.  Salazar v. Paramount Global, 683 F. Supp. 3d 727, 744 (M.D. Tenn. 2023).  But, the district court determined that Salazar’s interpretation of the VPPA was “unavailing.”  Id.  Indeed, “there [was] no allegation in the complaint that Plaintiff accessed audio visual content through the newsletter (or at all, for that matter).  The newsletter [was] therefore not audio visual content, which necessarily means that Plaintiff [was] not a ‘subscriber’ under the VPPA.”  Id.

Salazar is no stranger to this legal issue.  Last year, in a virtually identical case, the U.S. District Court for the Southern District of New York dismissed a putative VPPA class action brought by Salazar on the basis that “signing up for an online newsletter did not make Salazar a VPPA subscriber.”  Salazar v. National Basketball Association, 118 F.4th 533, 536-37 (2d Cir. 2024).  Salazar appealed that decision to the Second Circuit, which reversed the lower court and held that the VPPA protects “consumers regardless of the particular goods or services rented, purchased, or subscribed to.”  Id. at 549.  If blog readers would like to learn more about the Second Circuit’s decision, a link to our post is included here.

Salazar appealed this case on the same grounds as his Second Circuit win and asked the Sixth Circuit to determine whether he was a “subscriber,” and thus a “consumer,” under the VPPA.

The Sixth Circuit’s Decision

The Sixth Circuit affirmed the district court’s ruling and agreed that, to be considered a “consumer” under the VPPA, an individual must purchase goods or services of an audio-visual nature.

Judge John Nalbandian, writing for the Sixth Circuit, reasoned that the term “subscriber” must be viewed in its broader context and in harmony with the other words in the statute, so as not to render associated words inconsistent or superfluous.  Applying these canons, the Sixth Circuit explained that the words “goods and services” informed the meaning of the term “subscriber.”  Because the statute uses the terms together, it encompasses only audio-visual goods or services provided by a video tape service provider, as opposed to any and all goods and services provided by that company.  In other words, if a video tape service provider makes “hammers” or a “Flintstones sweatshirt or a Scooby Doo coffee mug,” a consumer of such goods would not fall under the purview of the VPPA.  Paramount Global, 2025 WL 1000139, at *10.

In so holding, the Sixth Circuit departed from the Second and Seventh Circuits, which, including in the near-identical lawsuit brought by Salazar himself, found the phrase “goods or services” to encompass all goods and services that a provider places in the marketplace.  Judge Rachel Bloomekatz, penning the dissent, reached the same conclusion as those circuits.  She opined that, under the majority’s interpretation, a provider could “stitch[] together” non-video transactions with information about audio-visual transactions in ways that could reveal a consumer’s personal information.  Id. at *12.  The majority found such concerns unavailing and reasoned that the type of information available from the videos on Paramount Global’s website was not inherent to the newsletter and was “accessible to anyone, even those without a newsletter subscription.”  Id. at *7.

As a result, the Sixth Circuit affirmed the district court’s decision to dismiss the complaint without leave to amend.

Implications For Companies

Circuit splits in the federal courts are not uncommon.  It is nearly unprecedented, however, for one litigant to create a federal circuit split with himself.  Salazar could file one lawsuit in New York and his claims would go forward.  But if the exact same lawsuit were filed in Tennessee, dismissal would be the proper remedy.

This patchwork system may be difficult for corporate counsel, tasked with ensuring their companies’ adtech compliance, to follow.  But, the Sixth Circuit’s decision in Paramount Global is better than the alternative and could pave the way for other circuits to similarly limit the scope of the VPPA in their relevant jurisdictions.

In the meantime, however, corporate counsel for companies based in Kentucky, Michigan, Ohio, and Tennessee can rest a little easier knowing that they can offer newsletters without worrying that adtech installed solely on their websites will somehow subject them to draconian VPPA liability.

Federal Court Holds Illinois Genetic Privacy Claim Not Preempted By Federal Transportation Regulations

By Justin Donoho, Gerald L. Maatman, Jr., and Tyler Zmick

Duane Morris Takeaways:  In Short v. MV Transportation, Inc., No. 24-CV-3019 (N.D. Ill. Mar. 10, 2025), Judge Manish S. Shah of the U.S. District Court for the Northern District of Illinois denied defendant’s bid to dismiss a claim brought under the Illinois Genetic Information Privacy Act (“GIPA”).  In his ruling, Judge Shah acknowledged that U.S. Department of Transportation regulations require companies in the transportation industry (including defendant) to ensure their drivers satisfy certain physical qualification criteria.  The Court nonetheless rejected defendant’s argument that the regulations preempt the GIPA because they do not specifically require employers to ask applicants about their family medical histories (which the GIPA prohibits).  In other words, the Court denied defendant’s motion to dismiss because the GIPA does not make it “physically impossible” to comply with federal regulations. 

Background

Plaintiff Kevin Short alleged that he applied for a position as a driver for Defendant MV Transportation, Inc., a company that provides paratransit services.  As part of the application process, Plaintiff was required to complete a physical examination during which he was asked about his family medical history, including whether his family members had a history of high blood pressure, heart disease, or diabetes.

Plaintiff subsequently sued MV Transportation under the GIPA, alleging that the company violated Section 25(c)(1) of the statute by “solicit[ing], request[ing], [or] requir[ing] . . . genetic information of a person or a family member of the person . . . as a condition of employment [or] preemployment application.”  410 ILCS 513/25(c)(1).

MV Transportation moved to dismiss the complaint on the basis that the Department of Transportation’s (“DOT”) regulations preempted Plaintiff’s GIPA claim.  Specifically, MV Transportation argued that Plaintiff’s claim was barred under a “conflict preemption” theory because allowing the claim to proceed would force MV Transportation to choose between complying with the GIPA or complying with federal requirements to “conduct[ ] thorough physical examinations of its drivers.”

For support, MV Transportation pointed to the Motor Carrier Safety Act, under which the DOT regulates commercial motor vehicle safety by promulgating “minimum safety standards” to ensure that “the physical condition of operators . . . is adequate to enable them to operate the vehicles safely” – including by requiring drivers to satisfy 13 “physical qualification criteria.”  49 U.S.C. § 31136(a)(3).

The Court’s Decision

In denying MV Transportation’s motion, the Court noted that conflict preemption applies only where “compliance with both federal and state regulations is a physical impossibility” or where the state law “stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress.”  Id. at 6-7 (citations omitted); see also id. at 6 (noting that “‘[i]nvoking some brooding federal interest’ is insufficient to establish preemption; instead, MV Transportation must identify ‘a constitutional text or a federal statute’ that displaces or conflicts with the state law”) (quoting Virginia Uranium, Inc. v. Warren, 587 U.S. 761, 767 (2019)).  The Court further observed that MV Transportation had the burden of overcoming the “presumption against preemption.”

In its ruling, the Court concluded that it is not physically impossible for MV Transportation to simultaneously comply with the GIPA and DOT regulations relative to Plaintiff’s pre-employment health screening because the DOT regulations do not specifically require any inquiry into a driver’s family medical history.  MV Transportation asserted that DOT regulations nonetheless “contemplate[] that medical examiners may discuss” a person’s family medical history during a physical exam.  The Court was not persuaded, however, stating that such a scenario is “not enough to suggest that compliance with GIPA and the federal regulations is ‘physically impossible.’”  Id. at 9 (“The mere possibility that a medical examiner asks for information protected by GIPA while performing an examination does not demonstrate impossibility to comply with both federal and state law.”). 

The Court similarly held that the GIPA is not an obstacle to the execution of Congress’s purposes, as reflected in the Motor Carrier Safety Act and DOT regulations.  As support for this conclusion, the Court observed that the relevant DOT regulations and the GIPA serve different purposes – the regulations are meant to promote the safe operation of commercial motor vehicles, while the GIPA focuses on health information privacy.

Implications Of The Decision

Short v. MV Transportation is one of several recent decisions in which courts denied bids to dismiss GIPA claims at the pleading stage. 

Given this litigation landscape and the statute’s strict penalty provision – under which statutory damages can quickly become significant ($2,500 per negligent violation and $15,000 per intentional or reckless violation, see 410 ILCS 513/40(a)(1)-(2)) – employers should ensure they comply with the statute regarding any health screenings they ask applicants or employees to complete (including by explicitly advising applicants and employees not to disclose their family medical histories during the screenings).
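To illustrate how quickly that penalty provision compounds, here is a minimal sketch of the statutory-damages arithmetic.  The per-violation amounts come from 410 ILCS 513/40(a)(1)-(2); the class size and the negligent/intentional split in the example are hypothetical, not figures from any case:

```python
# GIPA statutory damages under 410 ILCS 513/40(a)(1)-(2).
# The per-violation amounts are statutory; the example class size below
# and the mix of negligent vs. intentional violations are hypothetical.
NEGLIGENT_DAMAGES = 2_500      # per negligent violation
INTENTIONAL_DAMAGES = 15_000   # per intentional or reckless violation

def gipa_exposure(negligent_violations: int, intentional_violations: int) -> int:
    """Total statutory-damages exposure for a given mix of violations."""
    return (negligent_violations * NEGLIGENT_DAMAGES
            + intentional_violations * INTENTIONAL_DAMAGES)

# Hypothetical: 1,000 applicants screened, one negligent violation each.
print(gipa_exposure(1_000, 0))  # 2500000
```

Even at the lower, negligence-only figure, a modest applicant pool produces multi-million-dollar exposure, which is why plaintiffs’ counsel find these claims attractive.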

It’s Here! The Duane Morris Privacy Class Action Review – 2025

By Gerald L. Maatman, Jr., Jennifer A. Riley, Alex W. Karasik, Gregory Tsonis, Justin Donoho, and Tyler Zmick

Duane Morris Takeaways: The last year saw a virtual explosion in privacy class action litigation. As a result, compliance with privacy laws in the myriad of ways that companies interact with employees, customers, and third parties is a corporate imperative. To that end, the class action team at Duane Morris is pleased to present the second edition of the Privacy Class Action Review – 2025. This publication analyzes the key privacy-related rulings and developments in 2024 and the significant legal decisions and trends impacting privacy class action litigation for 2025. We hope that companies and employers will benefit from this resource in their compliance with these evolving laws and standards.

Click here to bookmark or download a copy of the Privacy Class Action Review – 2025 e-book. Look forward to an episode on the Review coming soon on the Class Action Weekly Wire!

Ninth Circuit Dismisses Adtech Class Action For Lack Of Standing

By Gerald L. Maatman, Jr. and Justin Donoho

Duane Morris Takeaways:  On December 17, 2024, in Daghaly, et al. v. Bloomingdales.com, LLC, No. 23-4122, 2024 WL 5134350 (9th Cir. Dec. 17, 2024), the Ninth Circuit ruled that a plaintiff lacked Article III standing to bring her class action complaint alleging that an online retailer’s use of website advertising technology disclosed website visitors’ browsing activities in violation of the California Invasion of Privacy Act and other statutes.  The ruling is significant because it shows that adtech claims cannot be brought in federal court without specifying the plaintiff’s web browsing activities that allegedly were disclosed.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on many websites in operation today.

In Daghaly, Plaintiff brought suit against an online retailer.  According to Plaintiff, the retailer installed the Meta Pixel and other adtech on its public-facing website and thereby transmitted web-browsing information entered by visitors such as which products the visitor clicked on and whether the visitor added the product to his or her shopping cart or wish list.  Id., No. 23-CV-129, ECF No. 1 ¶¶ 44-45.  As for Plaintiff herself, she did not allege what she clicked on or what her web browsing activities entailed upon visiting the website, only that she accessed the website via the web browser on her phone and computer.  Id. ¶ 40.

Based on these allegations, Plaintiff alleged claims for violation of the California Invasion of Privacy Act (CIPA) and other statutes.  The district court dismissed the complaint for lack of personal jurisdiction.  Id., 697 F. Supp. 3d 996 (S.D. Cal. 2023).  Plaintiff appealed and, in its appellate response brief, the retailer argued for the first time that Plaintiff lacked Article III standing.

The Ninth Circuit’s Opinion

The Ninth Circuit agreed with the retailer, found that Plaintiff lacked standing, and remanded for further proceedings.

The Ninth Circuit opined that, to allege Article III standing as required to bring suit in federal court, a plaintiff must “clearly allege facts demonstrating” that she “suffered an injury in fact that is concrete, particularized, and actual or imminent.”  Id., 2024 WL 5134350, at *2 (citing, e.g., TransUnion LLC v. Ramirez, 594 U.S. 413, 423 (2021)).

Plaintiff argued that she sufficiently alleged standing via her allegations that she “visited” and “accessed” the website and was “subjected to the interception of her Website Communications.”  Id. at *1.  Moreover, Plaintiff argued, the retailer’s alleged disclosure to adtech companies of the fact of her visit to the retailer’s website sufficiently alleged an invasion of her privacy, and thereby Article III standing, because the adtech companies could use this fact to stitch together a broader, composite picture of Plaintiff’s online activities.  See oral argument, here.

The Ninth Circuit rejected these arguments.  It found that Plaintiff “does not allege that she herself actually made any communications that could have been intercepted once she had accessed the website. She does not assert, for example, that she made a purchase, entered text, or took any actions other than simply opening the webpage and then closing it.”  Id., 2024 WL 5134350, at *1.  As the Ninth Circuit explained by way of example during oral argument, Plaintiff had not alleged, for instance, that she was shopping for underwear and that the retailer transmitted information about her underwear purchases.  Moreover, the Ninth Circuit found “no authority suggesting that the fact that she visited [the retailer’s website] (as opposed to information she might have entered while using the website) constitutes ‘contents’ of a communication within the meaning of CIPA Section 631.”  Id.

In short, the Ninth Circuit concluded that Plaintiff lacked Article III standing, and that this conclusion followed from Plaintiff’s failure to sufficiently allege the nature of the web browsing activities giving rise to all of her statutory claims.  Id. at *2.  The Ninth Circuit remanded with instructions that the district court grant leave to amend if properly requested.

Implications For Companies

The holding of Daghaly is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have found that an adtech plaintiff’s failure to identify what allegedly private information was disclosed via the adtech warrants dismissal under Rule 12(b)(6) for failure to plausibly plead various statutory and common-law claims.  See, e.g., our blog post about such a decision here.  Daghaly shows that, to have Article III standing to bring a federal lawsuit in the first place, adtech plaintiffs also need to identify what allegedly private information, beyond the mere fact of a visit to an online retailer’s website, was disclosed via the adtech.

Florida Federal Court Refuses To Certify Adtech Class Action

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan K. Norimoto

Duane Morris Takeaways:  On October 1, 2024, Judge Robert Scola of the U.S. District Court for the Southern District of Florida denied class certification in a case involving website advertising technology (“adtech”) in Martinez v. D2C, LLC, 2024 WL 4367406 (S.D. Fla. Oct. 1, 2024).  The ruling is significant as it shows that plaintiffs who file class action complaints alleging improper use of adtech cannot satisfy Rule 23’s numerosity requirement merely by showing the presence of adtech on a website and numerous visitors to that website.  The Court’s reasoning in denying class certification applies not only in adtech cases raising claims brought under the Video Privacy Protection Act (“VPPA”), like this one, but also to other adtech cases raising a wide variety of other statutory and common law legal theories.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on millions of corporate, governmental, and other websites in operation today.

In Martinez, the plaintiffs brought suit against D2C, LLC d/b/a Univision NOW (“Univision”), an online video-streaming service.  The parties did not dispute, at least for the purposes of class certification, that: (A) Univision installed the Meta Pixel on its video-streaming website; (B) Univision was a “video tape service provider” and the plaintiffs and other Univision subscribers were “consumers” under the VPPA, thereby giving rise to liability under that statute if the plaintiffs could show Univision transmitted their personally identifiable information (PII) such as their Facebook IDs along with the videos they accessed to Meta without their consent; (C) none of the plaintiffs consented; and (D) 35,845 subscribers viewed at least one video on Univision’s website.  Id. at *2. 

The plaintiffs moved for class certification under Rule 23.  The plaintiffs maintained that at least 17,000 subscribers, including (or in addition to) them, had their PII disclosed to Meta by Univision.  Id. at *3.  The plaintiffs reached this number upon acknowledging “at least two impediments to a subscriber’s viewing information’s being transmitted to Meta: (1) not having a Facebook account; and (2) using a browser that, by default, blocks the Pixel.”  Id. at *6.  Thus, the plaintiffs pointed to “statistics regarding the percentage of people in the United States who have Facebook accounts (68%) and the testimony of their expert … regarding the percentage of the population who use a web browser that would not block the Pixel transmission (70%), to conclude, using ‘basic math,’ that the class would be comprised of ‘at least approximately 17,000 individuals.’”  Id. at *6.  In contrast, Univision maintained that the plaintiffs failed to carry their burden of showing that even a single subscriber had their PII disclosed, including the three named plaintiffs.  Id. at *3.
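The plaintiffs’ “basic math” can be reproduced directly from the figures stated in the opinion:

```python
# The plaintiffs' "basic math" in Martinez: start with the undisputed count
# of subscribers who viewed at least one video, then discount by the share
# of the U.S. population with Facebook accounts (68%) and the share using
# a browser that by default would not block the Pixel (70%).
subscribers = 35_845          # viewed at least one video (undisputed)
facebook_share = 0.68         # percentage with Facebook accounts
nonblocking_share = 0.70      # percentage using a non-blocking browser

estimated_class = subscribers * facebook_share * nonblocking_share
print(round(estimated_class))  # 17062, i.e., "at least approximately 17,000"
```

As discussed below, the Court found this arithmetic too speculative: both percentages describe the general population, not whether any particular subscriber’s PII actually reached Meta.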

The Court’s Decision

The Court agreed with Univision and held that the plaintiffs did not carry their burden of showing numerosity.

First, the Court held that the plaintiffs’ reliance on statistics regarding percentage of people who have Facebook accounts was unhelpful, because “being logged in to Facebook”—not just having an account—“is a prerequisite to the Pixel disclosing information.”  Id. at *7 (emphasis in original).  Moreover, “being simultaneously logged in to Facebook is still not enough to necessarily prompt a Pixel transmission: a subscriber must also have accessed the prerecorded video on Univision’s website through the same web browser and device through which the subscriber (and not another user) was logged into Facebook.”  Id.

Second, the Court held that the plaintiffs’ reliance on their proffer that 70% of people use Google Chrome and Microsoft Edge, which allow Pixel transmission “under default configurations,” failed to account for all of the following “actions a user can take that would also block any Pixel transmission to Meta: enabling a browser’s third-party cookie blockers; setting a browser’s cache to ‘self-destruct’; clearing cookies upon the end of a browser session; and deploying add-on software that blocks third-party cookies.”  Id.
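Taken together, the Court’s reasoning treats a Pixel transmission as requiring every one of several conjunctive conditions to hold at once.  A minimal sketch of that logic (the parameter names are ours, chosen for illustration, not the Court’s):

```python
def pixel_transmission_possible(
    logged_into_facebook: bool,        # logged in, not merely an account holder
    same_browser_and_device: bool,     # Facebook login and video view together
    browser_allows_pixel: bool,        # e.g., a default Chrome/Edge configuration
    third_party_cookies_blocked: bool,
    cache_self_destructs: bool,
    cookies_cleared_each_session: bool,
    blocking_addon_installed: bool,
) -> bool:
    """Every condition must line up for the Pixel to send data to Meta."""
    return (logged_into_facebook
            and same_browser_and_device
            and browser_allows_pixel
            and not third_party_cookies_blocked
            and not cache_self_destructs
            and not cookies_cleared_each_session
            and not blocking_addon_installed)

# A single blocking factor defeats transmission:
print(pixel_transmission_possible(True, True, True, True, False, False, False))  # False
```

Because any one negative answer defeats transmission, population-level percentages cannot establish how many class members satisfied all of the conditions simultaneously.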

In short, the Court reasoned that the plaintiffs did not establish “the means to make a supported factual finding, that the class to be certified meets the numerosity requirement.”  Id. at *9.  Moreover, the Court found that the plaintiffs had not demonstrated that “any” PII had been disclosed, including their own.  Id. (emphasis in original).  In reply, the plaintiffs attempted to introduce evidence supplied by Meta that one of the plaintiffs’ PII had been transmitted to Meta.  Id.  The Court refused to consider this new information, supplied for the first time on reply, and further found that even if it were to consider the new evidence, “this only gets the Plaintiffs to one ‘class member.’”  Id. at *10 (emphasis in original).

Finding the plaintiffs’ failure to satisfy the numerosity requirement dispositive, the Court declined to evaluate the other Rule 23 factors.  Id. at *5.

Implications For Companies

This case is a win for defendants in adtech class actions.  In such cases, the Martinez decision can be cited as useful precedent for showing that the numerosity requirement is not met where plaintiffs put forth only speculative evidence as to whether the adtech disclosed plaintiffs’ and alleged class members’ PII to third parties.  The Court’s reasoning in Martinez applies not only in VPPA cases but also in other adtech cases alleging claims for invasion of privacy, claims under state and federal wiretap acts, and more.  All of these legal theories have adtech’s transmission of PII to third parties as a necessary element.  In sum, to establish numerosity, plaintiffs must demonstrate, at a minimum, that class members were logged into their own adtech accounts at the time they visited the defendants’ website, using the same device and browser for the adtech account and the visit, using a browser that did not block the transmission by default, and not deploying any of the browser settings and add-on software that would have blocked the transmission.

Georgia Federal Court Dismisses Data Privacy Class Action Against Healthcare Company For Failure To Sufficiently Allege Any Invasion Of Privacy, Damages, Or Wiretap Violation

By Gerald L. Maatman, Jr., Justin Donoho, and Ryan T. Garippo

Duane Morris Takeaways:  On August 24, 2024, in T.D. v. Piedmont Healthcare, Inc., No. 23-CV-5416 (N.D. Ga. Aug. 24, 2024), Judge Thomas Thrash of the U.S. District Court for the Northern District of Georgia dismissed in its entirety a class action complaint alleging that a healthcare company’s use of website advertising technology installed in its MyChart patient portal disclosed the plaintiffs’ private information in commission of the common law torts of invasion of privacy, breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, and in violation of the Federal Wiretap Act.  The ruling is significant because it shows that such claims cannot surmount Rule 12(b)(6)’s plausibility standard for legal reasons broadly applicable to a wide range of adtech class actions currently on file in many jurisdictions across the nation.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  As the Court explained, “cases like this have sprouted like weeds in recent years.”  Id. at 5.

In Piedmont, Plaintiffs brought suit against Piedmont Healthcare, Inc. (“Piedmont”).  According to Plaintiffs, Piedmont installed the Meta Pixel on its public-facing website and its secure patient portal, and thereby transmitted to Meta Plaintiffs’ “personally identifiable information (PII) and protected health information (PHI) without their consent.” Id. at 1-2.

Based on these allegations, Plaintiffs alleged claims for invasion of privacy, breach of fiduciary duty, negligence, breach of contract, unjust enrichment, and violation of the Electronic Communications Privacy Act (“ECPA”).  Piedmont moved to dismiss under Rule 12(b)(6) for failure to state sufficient facts that, if accepted as true, would state a claim for relief that is plausible on its face.

The Court’s Opinion

The Court agreed with Piedmont and dismissed all of Plaintiffs’ claims.

To state a claim for invasion of privacy, Plaintiffs were required to allege facts sufficient to show “an unreasonable and highly offensive intrusion upon another’s seclusion.”  Id. at 5.  Plaintiffs argued that Piedmont intruded upon their privacy by using the Meta Pixel to secretly transmit their PII and PHI to a third party for commercial gain.  Id. at 4.  Piedmont argued that these allegations failed to plausibly plead an intrusion or actionable intent, or that any intrusion was reasonably offensive or objectionable.  Id.  The Court concluded that “it seems that the weight of authority in similar pixel tracking cases is now solidly in favor of Piedmont’s argument. There is no intrusion upon privacy when a patient voluntarily provides personally identifiable information and protected health information to his or her healthcare provider.”  Id. at 5-6 (collecting cases).  The Court further commented that “it is widely understood that when browsing websites, your behavior may be tracked, studied, shared, and monetized. So it may not come as much of a surprise when you see an online advertisement for fertilizer shortly after searching for information about keeping your lawn green.”  Id. at 3-4.

To state claims for breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, one of the elements a plaintiff must allege is damages or, relatedly, enrichment.  Id. at 7-10.  Plaintiffs argued that they alleged seven categories of damages, as follows: “(i) invasion of privacy, including increased spam and targeted advertising they did not ask for; (ii) loss of confidentiality; (iii) embarrassment, emotional distress, humiliation and loss of enjoyment of life; (iv) lost time and opportunity costs associated with attempting to mitigate the consequences of the disclosure of their Private Information; (v) loss of benefit of the bargain; (vi) diminution of value of Private Information and (vii) the continued and ongoing risk to their Private Information.”  Id. at 9.  Piedmont argued that these damages theories stemming from “the provision of encrypted information only to Facebook” were implausible.  Id. at 7.  The Court agreed with Piedmont and rejected all of Plaintiffs’ damages theories.  Accordingly, it dismissed the remainder of Plaintiffs’ common-law claims.  As the Court explained: “No facts are alleged that would explain how receiving targeted advertisements from Facebook and Piedmont would plausibly cause any of the Plaintiffs to suffer these damages. This is not a case where the Plaintiffs’ personal information was stolen by criminal hackers with malicious intent. The Plaintiffs received targeted advertisements because they are Facebook users and have Facebook IDs. The Court finds the Plaintiffs’ damages theories untenable. Indeed, this court has rejected many identical theories arising under similar circumstances.”  Id. (collecting cases).

To state a claim for violation of the ECPA, also known as the federal wiretap act, a plaintiff must show an intentional interception of the contents of an electronic communication.  Id. at 11.  The ECPA is a one-party consent statute, meaning that there is no liability under the statute for any party to the communication “unless such communication is intercepted for the purposes of committing a criminal or tortious act in violation of the Constitution or laws of the United States or any State.”  18 U.S.C. § 2511(2)(d).  Piedmont argued that it could not have intercepted the same transmission it received on its website, nor could it have acted with a tortious or criminal purpose in seeking to drive marketing and revenue.  Id. at 10-11.  In response, the Plaintiffs contended that they stated a plausible ECPA claim, arguing that Piedmont intercepted the contents of their PII and PHI when it acquired such information through the Meta Pixel on its website and that the party exception is inapplicable because Piedmont acted with criminal and tortious intent in “wiretapping” their PII and PHI.  Id. at 11.  The Court concisely concluded: “As was the case in the invasion of privacy context, the weight of persuasive authority in similar pixel tracking cases supports Piedmont’s position.”  Id. at 11-12 (collecting cases).

Implications For Companies

The holding of Piedmont is a win for adtech class action defendants and should be instructive for courts around the country.  While many adtech cases around the country have made it past a motion to dismiss, many have not, and for the many that continue to be filed regularly, the outcome remains to be seen.  Piedmont provides powerful precedent for any company defending against adtech class action claims for invasion of privacy, common-law claims for damages or unjust enrichment, and alleged violation of the federal wiretap act.

Illinois Federal Court Dismisses Class Action Privacy Claims Involving Use Of Samsung’s “Gallery” App

By Tyler Zmick, Justin Donoho, and Gerald L. Maatman, Jr.

Duane Morris Takeaways:  In G.T., et al. v. Samsung Electronics America, Inc., et al., No. 21-CV-4976, 2024 WL 3520026 (N.D. Ill. July 24, 2024), Judge Lindsay C. Jenkins of the U.S. District Court for the Northern District of Illinois dismissed claims brought under the Illinois Biometric Information Privacy Act (“BIPA”).  In doing so, Judge Jenkins acknowledged limitations on the types of conduct (and types of data) that can subject a company to liability under the statute.  The decision is welcome news for businesses that design, sell, or license technology yet do not control or store any “biometric” data that may be generated when customers use the technology.  The case also reflects the common sense notion that a data point does not qualify as a “biometric identifier” under the BIPA if it cannot be used to identify a specific person.  G.T. v. Samsung is required reading for corporate counsel facing privacy class action litigation.

Background

Plaintiffs — a group of Illinois residents who used Samsung smartphones and tablets — alleged that their respective devices came pre-installed with a “Gallery application” (the “App”) that can be used to organize users’ photos.  According to Plaintiffs, whenever an image is created on a Samsung device, the App automatically: (1) scans the image to search for faces using Samsung’s “proprietary facial recognition technology”; and (2) if it detects a face, the App analyzes the face’s “unique facial geometry” to create a “face template” (i.e., “a unique digital representation of the face”).  Id. at *2.  The App then organizes photos based on images with similar face templates, resulting in “pictures with a certain individual’s face [being] ‘stacked’ together on the App.”  Id.

Based on their use of the devices, Plaintiffs alleged that Samsung violated §§ 15(a) and 15(b) of the BIPA by: (1) failing to develop a written policy made available to the public establishing a retention policy and guidelines for destroying biometric data, and (2) collecting Plaintiffs’ biometric data without providing them with the requisite notice and obtaining their written consent.

Samsung moved to dismiss on two grounds, arguing that: (1) Plaintiffs did not allege that Samsung “possessed” or “collected” their biometric data because they did not claim the data ever left their devices; and (2) Plaintiffs failed to allege that data generated by the App qualifies as “biometric identifiers” or “biometric information” under the BIPA, because Samsung cannot use the data to identify Plaintiffs or others appearing in uploaded photos.

The Court’s Decision

The Court granted Samsung’s motion to dismiss on both grounds.

“Possession” And “Collection” Of Biometric Data

Regarding Samsung’s first argument, the Court began by explaining what it means for an entity to be “in possession of” biometric data under § 15(a) and to “collect” biometric data under § 15(b).  The Court observed that “possession” occurs when an entity exercises control over data or holds it at its disposal.  Regarding “collection,” the Court noted that the term “collect,” and the other verbs used in § 15(b) (“capture, purchase, receive through trade, or otherwise obtain”), all refer to an entity taking an “active step” to gain control of biometric data.

The Court proceeded to consider Plaintiffs’ contention that Samsung was “in possession of” their biometrics because Samsung controls the proprietary software used to operate the App.  The Court sided with Samsung, however, concluding that Plaintiffs failed to allege “possession” (and thus failed to state a § 15(a) claim) because they did not allege that Samsung can access the data (as opposed to the technology Samsung employs).  Id. at *9 (“Samsung controls the App and its technology, but it does not follow that this control gives Samsung dominion over the Biometrics generated from the App, and plaintiffs have not alleged Samsung receives (or can receive) such data.”).

As for § 15(b), the Court rejected Plaintiffs’ argument that Samsung took an “active step” to “collect” their biometrics by designing the App to “automatically harvest[] biometric data from every photo stored on the Device.”  Id. at *11.  The Court determined that Plaintiffs’ argument failed for the same reason their § 15(a) “possession” argument failed.  Id. at *11-12 (“Plaintiffs’ argument again conflates technology with Biometrics. . . . Plaintiffs do not argue that Samsung possesses the Data or took any active steps to collect it.  Rather, the active step according to Plaintiffs is the creation of the technology.”).

“Biometric Identifiers” And “Biometric Information”

The Court next turned to Samsung’s second argument for dismissal – namely, that Plaintiffs failed to allege that data generated by the App is “biometric” under the BIPA because Samsung could not use it to identify Plaintiffs (or others appearing in uploaded photos).

In opposing this argument, Plaintiffs asserted that: (1) the “App scans facial geometry, which is an explicitly enumerated biometric identifier”; and (2) the “mathematical representations of face templates” stored through the App constitute “biometric information” (i.e., information “based on” scans of Plaintiffs’ “facial geometry”).  Id. at *13.

The Court ruled that “Samsung has the better argument,” holding that Plaintiffs’ claims failed because Plaintiffs did not allege that Samsung can use data generated through the App to identify specific people.  Id. at *15.  The Court acknowledged that cases are split “on whether a plaintiff must allege a biometric identifier can identify a particular individual, or if it is sufficient to allege the defendant merely scanned, for example, the plaintiff’s face or retina.”  Id. at *13.  After employing relevant principles of statutory interpretation, the Court sided with the cases in the former category and opined that “the plain meaning of ‘identifier,’ combined with the BIPA’s purpose, demonstrates that only those scans that can identify an individual qualify.”  Id. at *15.

Turning to the facts alleged in the Complaint, the Court concluded that Plaintiffs failed to state claims under the BIPA because the data generated by the App does not amount to “biometric identifiers” or “biometric information” merely because it can be used to detect and group the unique faces of unnamed people.  In other words, biometric information must be capable of recognizing an individual’s identity – “not simply an individual’s feature.”  Id. at *17; see also id. at *18 (noting that Plaintiffs claimed only that the App groups unidentified faces together, and that it is the device user who can add names or other identifying information to the faces).

Implications Of The Decision

G.T. v. Samsung is one of several recent decisions grappling with key questions surrounding the BIPA, including questions as to: (1) when an entity engages in conduct that rises to the level of “possession” or “collection” of biometrics; and (2) what data points qualify (and do not qualify) as “biometric identifiers” and “biometric information” such that they are subject to regulation under the statute.

Regarding the first question, the Samsung case reflects the developing majority position among courts – i.e., a company is not “in possession of,” and has not “collected,” data that it does not actually receive or access, even if it created and controlled the technology that generated the allegedly biometric data.

As for the second question, the Court’s decision in Samsung complements the Ninth Circuit’s recent decision in Zellmer v. Meta Platforms, Inc., where it held that a “biometric identifier” must be capable of identifying a specific person.  See Zellmer v. Meta Platforms, Inc., 104 F.4th 1117, 1124 (9th Cir. 2024) (“Reading the statute as a whole, it makes sense to impose a similar requirement on ‘biometric identifier,’ particularly because the ability to identify did not need to be spelled out in that term — it was readily apparent from the use of ‘identifier.’”).  Courts have not uniformly endorsed this reading, however, and parties will likely continue litigating the issue unless and until the Illinois Supreme Court provides the final word on what counts as a “biometric identifier” and “biometric information.”

California Federal Court Denies Motion To Dismiss Artificial Intelligence Employment Discrimination Lawsuit

By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller

Duane Morris Takeaways:  In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability.  This litigation has been closely watched for its novel case theory based on artificial intelligence use in making personnel decisions.  For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount.  This ruling illustrates that employment screening vendors who utilize AI software may potentially be liable for discrimination claims as agents of employers.

This development follows Workday’s first successful Motion to Dismiss, which we blogged about here, and the EEOC’s amicus brief filing, which we blogged about here.

Case Background

Plaintiff is an African American male over the age of 40, with a bachelor’s degree in finance from Morehouse College, an all-male Historically Black College and University, and an honors graduate degree. Id. at 2. Plaintiff also alleges he suffered from anxiety and depression.  Since 2017, Plaintiff applied to over 100 jobs with companies that use Workday’s screening tools.  In many applications, Plaintiff alleges he was required to take a “Workday-branded assessment and/or personality test.”  Plaintiff asserts these assessments “likely . . . reveal mental health disorders or cognitive impairments,” so others who suffer from anxiety and depression are “likely to perform worse  … and [are] screened out.”  Id. at 2-3.  Plaintiff was allegedly denied employment through Workday’s platform across all submitted applications.

Plaintiff alleges Workday’s algorithmic decision-making tools discriminate against job applicants who are African-American, over the age of 40, and/or are disabled.  Id. at 3.  In support of these allegations, Plaintiff claims that in one instance, he applied for a position at 12:55 a.m. and his application was rejected less than an hour later.  Plaintiff brought claims under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (“Section 1981”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the ADA Amendments Act of 2008 (“ADA”), for intentional discrimination on the basis of race and age, and disparate impact discrimination on the basis of race, age, and disability.  Plaintiff also brought a claim for aiding and abetting race, disability, and age discrimination against Workday under California’s Fair Employment and Housing Act (“FEHA”).  Workday moved to dismiss; Plaintiff’s opposition was supported by an amicus brief filed by the EEOC.

The Court’s Decision

The Court granted in part and denied in part Workday’s motion to dismiss.  At the outset of its opinion, the Court noted that Plaintiff alleged Workday was liable for employment discrimination, under Title VII, the ADEA, and the ADA, on three theories: as an (1) employment agency; (2) agent of employers; and (3) an indirect employer. Id. at 5.

The Court opined that the relevant statutes prohibit discrimination “not just by employers but also by agents of those employers,” so an employer cannot “escape liability for discrimination by delegating [] traditional functions, like hiring, to a third party.”  Id.  Therefore, an employer’s agent can be independently liable when the employer has delegated to the agent “functions [that] are traditionally exercised by the employer.”  Id.

With regard to the “employment agency” theory, the Court reasoned that employment agencies “procure employees for an employer” – meaning – “they find candidates for an employer’s position; they do not actually employ those employees.”  Id. at 7.  The Court further reasoned that employment agencies are liable when they “fail or refuse to refer” individuals for consideration by employers on prohibited bases.  Id.  The Court held Plaintiff did not sufficiently allege Workday finds employees for employers such that Workday is an employment agency.  Accordingly, the Court granted Workday’s motion to dismiss with respect to the anti-discrimination statutes based on an employment agency theory, without leave to amend.

In addition, the Court held that Workday may be liable on an agency theory, as Plaintiff plausibly alleged Workday’s customers delegated their traditional function of rejecting candidates or advancing them to the interview stage to Workday.  Id.  The Court determined that if it reasoned otherwise, and accepted Workday’s arguments, then companies would “escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence).”  Id. at 8.  The Court determined Plaintiff’s allegations that Workday’s decision-making tools “make hiring decisions” as its software can “automatically disposition[] or move[] candidates forward in the recruiting process” were plausible.  Id. at 9.

The Court opined that given Workday’s allegedly “crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”  Id.  With regard to artificial intelligence, the Court noted “Workday’s role in the hiring process was no less significant because it allegedly happens through artificial intelligence,” and the Court declined to “draw[] an artificial distinction between software decision-makers and human decision-makers,” as any distinction would “gut anti-discrimination laws in the modern era.”  Id. at 10.

Accordingly, the Court denied Workday’s motion to dismiss Plaintiff’s federal discrimination claims.

Disparate Impact Claims

The Court next denied Workday’s motion to dismiss Plaintiff’s disparate impact discrimination claims as Plaintiff adequately alleged all elements of a prima facie case for disparate impact.

First, Plaintiff’s amended complaint asserted that Workday’s use of algorithmic decision-making tools to screen applicants, including training data from personality tests, had a disparate impact on job-seekers in certain protected categories.  Second, the Court recognized that Plaintiff’s allegations were not typical: “Unlike a typical employment discrimination case where the dispute centers on the plaintiff’s application to a single job, [Plaintiff] has applied to and been rejected from over 100 jobs for which he was allegedly qualified.”  Id. at 14.  The Court reasoned the “common denominator” for these positions was Workday and the platform Workday provided to companies for application intake and screening.  Id.

The Court held “[t]he zero percent success rate at passing Workday’s initial screening” combined with Plaintiff’s allegations of bias in Workday’s training data and tools plausibly supported an inference that Workday’s algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate’s race, age, or disability.  Id. at 15.  The Court therefore denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA.  Id. at 16.

Intentional Discrimination Claims

The Court granted Workday’s motion to dismiss Plaintiff’s claims that Workday intentionally discriminated against him based on race and age.  Id.  The Court found that Plaintiff sufficiently alleged he was qualified through his various degrees, areas of expertise, and work experience.  However, the Court found that Plaintiff’s allegations that Workday intended its screening tools to be discriminatory because “Workday [was] aware of the discriminatory effects of its applicant screening tools” were not enough to satisfy his pleading burden.  Id. at 18.  Accordingly, the Court granted Workday’s motion to dismiss Plaintiff’s intentional discrimination claims under Title VII, the ADEA, and § 1981, without leave to amend, but left open the door for Plaintiff to amend if a discriminatory intention is revealed during future discovery.  Id.  Finally, the Court granted Workday’s motion to dismiss Plaintiff’s claim under California’s Fair Employment and Housing Act with leave to amend.

Implications For Employers

The Court’s resolution of employer liability for software vendors that provide AI-screening tools for employers centered on whether those tools were involved in “traditional employment decisions.”  Here, the Court held that Plaintiff sufficiently alleged that Workday was an agent for employers since it made employment decisions in the screening process through the use of artificial intelligence.

This decision likely will be used as a roadmap for the plaintiffs’ bar to bring discrimination claims against third-party vendors involved in the employment decision process, especially those using algorithmic software to make those decisions. Companies should also take heed, especially given the EEOC’s prior guidance that suggests employers should be auditing their vendors for the impact of their use of artificial intelligence.

California Federal Court Refuses To Dismiss Wiretapping Class Action Involving Company’s Use Of Third-Party AI Software

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan Norimoto

Duane Morris Takeaways:  On July 5, 2024, in Jones, et al. v. Peloton Interactive, Inc., No. 23-CV-1082, 2024 WL 3315989 (S.D. Cal. July 5, 2024), Judge M. James Lorenz of the U.S. District Court for the Southern District of California denied a motion to dismiss a class action complaint alleging that a company’s use of a third-party AI-powered chat feature embedded in the company’s website aided and abetted an interception in violation of the California Invasion of Privacy Act (CIPA).  Judge Lorenz was unpersuaded by the company’s arguments that the third party functioned as an extension of the company rather than as a third-party eavesdropper.  Instead, the Court found that the complaint had sufficient facts to plausibly allege that the third party used the chats to improve its own AI algorithm and thus was more akin to a third-party eavesdropper for which the company could be held liable for aiding and abetting wiretapping under the CIPA.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that third-party AI-powered software embedded in defendants’ websites or other processes and technologies captured plaintiffs’ information and sent it to the third party.  A common claim raised in these cases arises under federal or state wiretap acts and seeks hundreds of millions or billions of dollars in statutory damages.  No wiretap claim can succeed, however, where the plaintiff has consented to the embedded technology’s receipt of their communications.  See, e.g., Smith v. Facebook, Inc., 262 F. Supp. 3d 943, 955 (N.D. Cal. 2017) (dismissing CIPA claim involving embedded Meta Pixel technology because plaintiffs consented to alleged interceptions by Meta via their Facebook user agreements).

In Jones, Plaintiffs brought suit against an exercise equipment and media company.  According to Plaintiffs, the defendant company used third-party software embedded in its website’s chat feature.  Id. at *1.  Plaintiffs further alleged that the software routed the communications directly to the third party without Plaintiffs’ consent, thereby allowing the third party to use the content of the communications “to improve the technological function and capabilities of its proprietary, patented artificial intelligence software.”  Id. at **1, 4.

Based on these allegations, Plaintiffs alleged a claim for aiding and abetting an unlawful interception and use of the intercepted information under California’s wiretapping statute, CIPA § 631.  Id. at *2.  Although Plaintiffs did not allege any actual damages, see ECF No. 1, the statutory damages they sought totaled at least $1 billion.  See id. ¶ 33 (alleging hundreds of thousands of class members); Cal. Penal Code § 637.2 (setting forth statutory damages of $5,000 per violation).  The company moved to dismiss under Rule 12(b)(6), arguing that the “party exception” to CIPA applied because the third-party software “functions as an extension of [the company] rather than as a third-party eavesdropper.”  2024 WL 3315989, at *2.

The Court’s Opinion

The Court denied the company’s motion and allowed Plaintiffs’ CIPA claim to proceed to discovery.

The CIPA contains a party exception, meaning that there is no liability under the statute for a party to the communication itself.  Id. at *2.  To answer the question for purposes of CIPA’s party exception of whether the embedded chat software provider was more akin to a party or a third-party eavesdropper, the Court found that courts look to the “technical context of the case.”  Id. at *3.  As the Court explained, a software provider can be held liable as a third party under CIPA if that entity listens in on a consensual conversation where the entity “uses the collected data for its own commercial purposes.”  Id.  By contrast, the Court further explained, if the software provider merely collects, refines, and relays the information obtained on the company website back to the company “in aid of [defendant’s] business,” then it functions as a tool and not as a third party.  Id.

Guided by this framework, the Court found sufficient allegations that the software provider used the chats collected on the company’s website for its own purposes of improving its AI-driven algorithm.  Id. at *4.  Therefore, according to the Court, the complaint sufficiently alleged that the software provider was “more than a mere ‘extension’” of the company, such that CIPA’s party exemption did not apply and Plaintiffs sufficiently stated a claim for the company’s aiding and abetting of the software provider’s wiretap violation.  Id.

Implications For Companies

The Court’s opinion serves as a cautionary tale for companies using third-party AI-powered processes and technologies that collect customer communications and information.  As the ruling shows, litigation risk associated with companies’ use of third-party AI-powered algorithms is not limited to complaints alleging damaging outcomes such as discriminatory impacts, like those alleged in Louis v. Saferent Sols., LLC, 685 F. Supp. 3d 19, 41 (D. Mass. 2023) (denying motion to dismiss claim under Fair Housing Act against landlord in conjunction with landlord’s use of algorithm used to calculate risk of leasing a property to a particular tenant).  In addition, companies face the risk of high-stakes claims for statutory damages under wiretap statutes associated with companies’ use of third-party AI-powered algorithms embedded in their websites, even if the third party’s only use of the collected communications is to improve its algorithm and even if no actual damages are alleged.

As AI-related technologies continue to grow, and litigation in this area grows accordingly, organizations should consider in light of Jones whether to modify their website terms of use, data privacy policies, and all other notices to the organizations’ website visitors and customers to describe the organization’s use of AI in additional detail.  Doing so could deter, or help defend, future AI class action lawsuits similar to the many being filed today that allege omission of such details, raise claims under various states’ wiretap acts and consumer fraud acts, and seek multimillion-dollar and billion-dollar statutory damages.

© 2009-2025 Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
