Seventh Circuit Holds BIPA Amendment Applies Retroactively, Reversing Three Illinois Federal Court Decisions

By Gerald L. Maatman, Jr., Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On April 1, 2026, in Clay et al. v. Union Pacific Railroad Co. et al., Nos. 25-2185 et al., 2026 WL 891902 (7th Cir. Apr. 1, 2026), a three-judge panel of the U.S. Court of Appeals for the Seventh Circuit reversed three federal district court decisions and held that the August 2, 2024, amendment to Section 20 of the Illinois Biometric Information Privacy Act (“BIPA”) applies retroactively to cases pending at the time of enactment. The Seventh Circuit concluded that the amendment is remedial because it governs damages rather than liability and, therefore, applies retroactively under Illinois law.

This decision is a watershed win for BIPA defendants in the class action space. It significantly curtails potential exposure by confirming that plaintiffs may recover, at most, $5,000 in statutory damages per person for intentional violations or $1,000 per person for negligent violations, rather than per-scan damages that previously threatened astronomical liability.

Background

As the Seventh Circuit observed, “BIPA has become a font of high-stakes litigation.” Id. at *1. In response to the Illinois Supreme Court’s decision in Cothron v. White Castle Sys., Inc., 216 N.E.3d 918, 926 (Ill. 2023), which held that BIPA claims accrue “with every scan or transmission” of biometric information, the Illinois General Assembly amended Section 20 of the BIPA in August 2024 to clarify the scope of recoverable damages. The amendment provides, in relevant part, that a private entity that collects or discloses “the same biometric identifier or biometric information from the same person using the same method of collection . . . has committed a single violation . . . for which the aggrieved person is entitled to, at most, one recovery under this Section.” 740 ILCS 14/20(b) (emphasis added).

The consolidated appeals arose from three cases asserting typical BIPA theories. Plaintiff Reginald Clay alleged that Union Pacific violated Section 15(b) by requiring repeated fingerprint scans to access the company’s facilities. Plaintiffs John Gregg and Brandon Willis alleged that their employers used biometric timekeeping systems in violation of Sections 15(a), (b), and (d).

The Seventh Circuit emphasized the extraordinary financial stakes. Plaintiff Clay alleged approximately 1,500 fingerprint scans – translating to $7.5 million in potential damages for a single plaintiff if damages were calculated on a per-scan basis. 2026 WL 891902, at *2. In contrast, the putative class claims in Plaintiff Willis’ case exposed the defendant to billions of dollars in potential liability. Id. The three interlocutory appeals posed the same legal question: whether the 2024 amendment to BIPA Section 20 applies retroactively to limit such exposure.
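The per-scan versus per-person arithmetic described above can be sketched in a short, purely illustrative calculation. The scan count and statutory amounts are the figures quoted in this post; the comparison assumes intentional-violation damages of $5,000 and is not a substitute for the opinion itself:

```python
# Illustrative only: contrast per-scan exposure (the pre-amendment Cothron
# theory) with per-person exposure (the post-amendment cap) for one plaintiff.

SCANS = 1_500          # approximate fingerprint scans alleged by Plaintiff Clay
INTENTIONAL = 5_000    # BIPA statutory damages per intentional violation
NEGLIGENT = 1_000      # BIPA statutory damages per negligent violation

per_scan_exposure = SCANS * INTENTIONAL   # each scan treated as a violation
per_person_exposure = INTENTIONAL         # single recovery after the amendment

print(f"Per-scan exposure:   ${per_scan_exposure:,}")    # $7,500,000
print(f"Per-person exposure: ${per_person_exposure:,}")  # $5,000
```

The 1,500-to-1 ratio between the two figures is what the Seventh Circuit characterized as the extraordinary financial stakes of the retroactivity question.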

The Seventh Circuit’s Decision

The Seventh Circuit answered that question with a definitive “yes.” It held that the amendment to Section 20 applies retroactively to pending cases. Id. at *3. 

Applying Illinois retroactivity principles, the Seventh Circuit explained that where the legislature is silent on the temporal reach of the amendment, as here, courts look to Section 4 of the Illinois Statute on Statutes, which, in turn, directs the court to determine whether the amendment is substantive or procedural. Id. (citing Perry v. Dep’t of Fin. & Pro. Regul., 106 N.E.3d 1016, 1026-27 (Ill. 2018)). 

The Seventh Circuit concluded that the amendment is remedial and, therefore, procedural, because it governs damages rather than underlying liability. Id. at *4.  Central to this determination was the statutory text and structure. The legislature amended Section 20, which addresses liquidated damages, rather than Section 15, which sets forth the substantive requirements governing the collection and disclosure of biometric data.  The Seventh Circuit emphasized that the amendment does not alter “the rights, duties, and obligations of persons to one another,” which are the defining characteristics of substantive changes. Id. (citing Perry, 106 N.E.3d at 1034). Instead, the amendment focuses exclusively on the remedies available once a violation has been established.

The appellees argued that the Illinois Supreme Court’s decision in Cothron established that each biometric scan constitutes a separate “violation,” and that the amendment therefore effected a substantive change by transforming thousands of violations into a single recoverable event, thus “terminating millions of dollars of liability.” Id. at *4. The Seventh Circuit rejected this position, reasoning that it both misinterprets the statute and overstates Cothron’s holding. Id. at *5. The Court clarified that Cothron addressed only when claims accrue under Section 15 and did not consider the meaning of “violation” for purposes of damages under Section 20. Id.  According to the Seventh Circuit, that distinction was dispositive. Id.

Ultimately, the Seventh Circuit determined that the amendment does not alter the number of violations or the injuries alleged by plaintiffs but instead limits the damages that may be awarded for those violations.  As the Seventh Circuit explained, the amendment “simply changed the statutory award of damages available to plaintiffs, cabining the discretion of trial court judges when they fashion the remedy.” Id. at *6.  Accordingly, the Court held that the amendment is remedial in nature and applies retroactively. Id. at *7. It therefore reversed the district court decisions that had concluded otherwise. Id.

Implications for Companies

Clay is one of the most consequential BIPA defense rulings in years. It materially reshapes the litigation landscape in several key respects:

  • Caps on exposure: The decision eliminates the “per-scan” damages theory asserted by plaintiffs that drove outsized settlement pressure and bet-the-company risk.
  • Immediate impact on pending cases: Defendants in ongoing litigation now have strong grounds to limit damages and revisit class certification, settlement posture, and jurisdictional arguments.
  • Strategic leverage: The ruling provides powerful leverage in motion practice and settlement negotiations, particularly where plaintiffs previously relied on inflated damages models.
  • Deterrence of new filings: By significantly reducing potential recoveries, Clay may dampen the volume of new BIPA filings and recalibrate plaintiffs’ bar incentives.

In sum, Clay delivers a decisive, defense-friendly interpretation of BIPA’s damages framework. Companies facing biometric privacy claims should promptly assess how this ruling affects their litigation strategy and potential exposure.

AbbVie Defeats Genetic Privacy Class Action Because Request For Plaintiff’s Family Medical History Was Not A “Condition Of Employment”

By Gerald L. Maatman, Jr., Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways: In Henry v. AbbVie, Inc., No. 23-CV-16830 (N.D. Ill. Mar. 20, 2026), Judge Manish S. Shah of the U.S. District Court for the Northern District of Illinois granted defendant’s motion for summary judgment and dismissed a claim brought under the Illinois Genetic Information Privacy Act (“GIPA”). In his ruling, Judge Shah determined that the alleged request for plaintiff’s family medical history (which Plaintiff did not provide) during his pre-employment medical screening was not a “condition of employment.” The decision is welcome news for employers that ask employees to undergo medical exams. The ruling indicates that an employer does not necessarily request genetic information “as a condition of employment” by requiring an employee to undergo a medical exam (even if the employee is asked to disclose genetic information during the exam).

Background

Plaintiff Daniel Henry was assigned to work for Defendant AbbVie, Inc., a biopharmaceutical company. During the onboarding process, Plaintiff was required to undergo a “medical surveillance,” which included “questionnaires, blood work, and a brief physical exam.” Henry v. AbbVie, Inc., 2026 WL 788630, at *2 (N.D. Ill. Mar. 20, 2026).

AbbVie used Premise Health, a third-party healthcare provider, to conduct Plaintiff’s medical screening. During the screening, Premise Health nurses asked Plaintiff to complete a written questionnaire and to undergo a physical examination. “Section U” of the questionnaire asked for Plaintiff’s genetic information (specifically, his family medical history), though Plaintiff did not complete that part of the form. Plaintiff claimed that nurses also verbally asked for his family medical history during the physical exam. After the exam, Plaintiff worked at an AbbVie facility in Illinois for four months.

Plaintiff subsequently sued AbbVie under the GIPA, alleging that the company violated Section 25(c)(1) of the statute by “solicit[ing], request[ing], [or] requir[ing] . . . genetic information of a person or a family member of the person . . . as a condition of employment [or] preemployment application.”  410 ILCS 513/25(c)(1).

AbbVie first responded to Plaintiff’s Complaint by moving to dismiss under Federal Rule of Civil Procedure 12(b)(6). Judge Shah denied AbbVie’s motion to dismiss after determining that the family medical history information sought during the medical screening constituted “genetic information” under the GIPA. See Henry v. AbbVie, Inc., 2024 WL 4278070, at *5-6 (N.D. Ill. Sept. 24, 2024).

AbbVie later moved for summary judgment, arguing that: (1) AbbVie did not request Plaintiff’s genetic information because third-party Premise Health (not AbbVie) conducted the screening; (2) even if AbbVie requested Plaintiff’s genetic information, the request was inadvertent because the medical questionnaire instructed Plaintiff to not disclose genetic information; and (3) AbbVie did not condition Plaintiff’s work status or assignment on any request for his genetic information.

The Court’s Decision

The Court granted AbbVie’s motion for summary judgment. While the Court was not persuaded by AbbVie’s first two arguments, it concluded that AbbVie’s third argument warranted dismissal of Plaintiff’s GIPA claim.

Request for Genetic Information

The Court first considered whether AbbVie can be characterized as having requested Plaintiff’s family medical history despite third-party Premise Health having conducted the medical screening. In answering in the affirmative, the Court relied on the GIPA’s incorporation of certain protections found in the federal Genetic Information Nondiscrimination Act (“GINA”). See 410 ILCS 513/25(a) (“An employer … shall treat genetic testing and genetic information in such a manner that is consistent with the requirements of federal law, including but not limited to [GINA].”). The Court cited a regulation promulgated under GINA providing that an employer that requires employees or applicants to undergo medical examinations “must tell health care providers not to collect genetic information, including family medical history, as part of a medical examination intended to determine the ability to perform a job.” 29 C.F.R. § 1635.8(d). Based on this federal regulation, the Court concluded that “[n]ot telling Premise Health to elicit genetic information is not enough; the [GIPA] requires an affirmative instruction not to elicit it.” Henry, 2026 WL 788630, at *5.

Inadvertent Disclosure

AbbVie’s second argument turned on the GIPA’s “inadvertent exception,” which states that “inadvertently requesting family medical history by an employer … does not violate this Act.” 410 ILCS 513/25(g). The Court observed that AbbVie’s health questionnaire advised Plaintiff to “not provide any genetic information, including family medical history.” Henry, 2026 WL 788630, at *6 (citation omitted). Thus, the Court held that the inadvertent exception barred Plaintiff’s claim to the extent it was premised on the written questionnaire. See id. (“The disclaimer on AbbVie’s form was enough to make any disclosure on the form inadvertent.”). But the Court determined that the exception did not necessarily bar Plaintiff’s claim to the extent it was premised on nurses orally asking for his family medical history. See id. (“[T]he written disclaimer in the form does not necessarily mean that [Plaintiff] knew that he should not disclose genetic information in response to verbal questions during his physical exam.”) (emphasis added).

Request as a Condition of Employment

Finally, the Court turned to AbbVie’s argument that Plaintiff’s claim failed because any request for his family medical history was not a condition of his employment. See 410 ILCS 513/25(c)(1) (an employer may not “solicit, request, [or] require … genetic information of a person or a family member of the person … as a condition of employment [or] preemployment application”) (emphasis added). The Court agreed with AbbVie and granted summary judgment on this basis, holding that the undisputed facts showed that any request for Plaintiff’s family medical history was not a condition of his employment. The Court noted that “the request for genetic information on the written questionnaire was not a condition of [Plaintiff’s] employment, for the simple fact that [Plaintiff] did not fill out that section and it did not affect his employment with AbbVie.” Henry, 2026 WL 788630, at *6.

Moreover, the Court concluded that even if Plaintiff was required to undergo a medical exam to be eligible to work at AbbVie, that did not mean that the verbal request for his family medical history (made during the exam) was a condition of his employment. See id. at *7. The Court thus recognized an important distinction between (i) AbbVie requiring Plaintiff to undergo a medical screening as a condition of employment and (ii) AbbVie specifically requesting Plaintiff’s family medical history as a condition of employment. See id. (“[T]hat [Plaintiff] could not decline to complete his medical surveillance does not create a genuine dispute over whether the verbal request during his exam was a condition of his employment. The undisputed evidence is that a contractor could decline parts of the surveillance and still have the surveillance considered completed.”). Accordingly, because AbbVie did not condition Plaintiff’s employment on a request for his genetic information, the Court granted summary judgment in the company’s favor.

Takeaways For Companies

As noted in a prior blog post, recent decisions suggest that courts may be hesitant to dismiss GIPA claims (especially at the pleading stage). Given the GIPA’s strict penalty provision – under which statutory damages can quickly become significant ($2,500 per negligent violation and $15,000 per intentional or reckless violation, see 410 ILCS 513/40(a)(1)-(2)) – we have advised employers to ensure they comply with the statute in connection with any health screenings they ask applicants or employees to complete (including by explicitly advising applicants and employees not to disclose their family medical histories during the screenings).

In this plaintiff-friendly litigation landscape, the Henry decision comes as welcome news for GIPA defendants and companies that have employees undergo medical screenings. Importantly, Henry suggests that an employer that merely directs an employee to undergo a medical exam (during which the employee may or may not be asked to provide her family medical history) does not necessarily request genetic information “as a condition of employment” in violation of the GIPA.

Massachusetts Federal Court Dismisses Adtech ECPA Class Action For Failure To Allege Defendants Purposefully Committed A Criminal Act, Furthering Split Of Authority

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: On March 6, 2026, in Progin v. UMass Memorial Health Care, Inc., No. 25-CV-40003, 2026 U.S. Dist. LEXIS 46522 (D. Mass. Mar. 6, 2026), Judge Allison D. Burroughs of the U.S. District Court for the District of Massachusetts granted a motion to dismiss a class action complaint brought by website users against Massachusetts health care and hospital entities. Plaintiffs alleged that the defendants’ use of website advertising technology (“adtech”) violated the federal Wiretap Act, also known as the Electronic Communications Privacy Act (“ECPA”). Following another similar ruling in the same court, see Goulart v. Cape Cod Healthcare, Inc., 2025 U.S. Dist. LEXIS 119435 (D. Mass. June 24, 2025), the decision is significant because it reflects the Massachusetts federal court’s alignment with other federal courts (including the U.S. District Court for the Southern District of Texas, as we blogged about here) that have interpreted the ECPA in a defense-friendly manner. In contrast, courts in other jurisdictions (including Illinois federal courts, as we blogged about here) have adopted more plaintiff-friendly interpretations, further deepening the emerging split of authority in adtech privacy litigation.

Background

Progin is one of a legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in websites secretly captured plaintiffs’ web-browsing data and transmitted that data to Meta, Google, and other online advertising agencies and data analytics companies.

In these adtech and similar internet-based technology class actions, plaintiffs frequently rely on the ECPA’s statutory damages provision. Their theory is simple: multiply the number of website visitors – potentially hundreds of thousands – by $10,000 in statutory damages per claimant to produce enormous potential exposure. Although plaintiffs have filed a majority of these lawsuits to date against healthcare providers, they have filed suits against companies spanning nearly every industry, including education, retail, and consumer products. Some of these cases have resulted in multimillion-dollar settlements, while others have been dismissed at the pleading stage (as we blogged about here) or the summary judgment stage (as we blogged about here), and the vast majority remain undecided.
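The exposure model described above reduces to simple multiplication. The sketch below is illustrative only: the $10,000 figure is the per-claimant statutory amount plaintiffs invoke in these cases, and the class sizes are hypothetical round numbers, not figures from any actual complaint:

```python
# Illustrative adtech exposure math: putative class size times the $10,000
# per-claimant statutory damages figure that plaintiffs typically assert.

STATUTORY_DAMAGES = 10_000  # per claimant, per plaintiffs' ECPA theory

def potential_exposure(class_size: int) -> int:
    """Aggregate statutory-damages exposure for a putative class."""
    return class_size * STATUTORY_DAMAGES

# Hypothetical class sizes show how quickly the numbers escalate:
for size in (10_000, 100_000, 500_000):
    print(f"{size:>9,} visitors -> ${potential_exposure(size):,}")
```

Even a modest hypothetical class of 100,000 website visitors yields a $1 billion headline number, which is why the threshold crime-tort-exception ruling in Progin matters so much at the pleading stage.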

In Progin, the plaintiffs sued a group of health care and hospital entities, seeking to represent a class of patients whose personal health information was allegedly disclosed by the Meta Pixel installed on defendants’ websites. The plaintiffs claimed that these alleged transmissions constituted an “interception” by defendants in violation of the ECPA.

Under the ECPA, a “party to the communication” generally cannot be sued unless it intercepted the communication “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This provision is commonly referred to as the “crime-tort exception.”

Plaintiffs argued that alleged violations of the Health Insurance Portability and Accountability Act (HIPAA) served as the predicate crime to trigger this exception. Specifically, plaintiffs argued that defendants were liable under the crime-tort exception because they intercepted and disclosed plaintiffs’ communications and personal information to third parties without consent in violation of HIPAA. 2026 U.S. Dist. LEXIS 46522, at *11.

The defendants moved to dismiss, arguing that the crime-tort exception did not apply because they did not install the Meta Pixel “for the distinct purpose of violating HIPAA or perpetrating a tort.” Id. at *11-12.

The Court’s Decision

The Court agreed with defendants and granted their motion to dismiss, holding that the amended complaint’s allegations “do not support the inference that Defendants purposefully committed the ‘criminal and tortious acts’ specified by Plaintiffs.” Id. at *13-14.

As the Court explained, based on the alleged predicate acts, plaintiffs were required to plausibly allege that defendants “purposefully used or caused to be used” plaintiffs’ unique health identifiers without authorization; “purposefully disclosed” plaintiffs’ individually identifiable health information to Facebook or Google without authorization; or “purposefully invaded” plaintiffs’ privacy.  Id. at *12-13.

Importantly, the Court emphasized that merely alleging that defendants knowingly committed such acts is insufficient because “‘purpose’ is an essential element of ECPA, distinct from the minimal intent [of knowingness] required under HIPAA.” Id. at *13 (quoting Doe v. Lawrence Gen. Hosp., 2025 U.S. Dist. LEXIS 195964, at *32 (D. Mass. Aug. 29, 2025)). The Court further explained that “[i]t is not enough that a crime or tort [may have been] a . . . side effect of the interception.” Id. at *14 (quoting Doe, 2025 U.S. Dist. LEXIS 195964, at *30).

Implications For Companies

The decision in Progin is a big win for healthcare providers and other defendants facing adtech class actions. This ruling reinforces a critical principle in ECPA and other privacy-based litigation: the defendants’ state of mind matters.

Under the ECPA’s HIPAA-based crime-tort exception, as well as under similar privacy statutes such as the Video Privacy Protection Act (“VPPA”), liability depends on the defendant’s knowledge and purpose. Where a defendant lacks knowledge that transmitted data is tied to specific individuals, or lacks the purpose to disclose identifiable information, the statutory requirements for liability may not be satisfied.

Accordingly, Progin provides strong authority for defendants to argue that routine adtech data transmissions cannot satisfy the purposeful intent requirements of the ECPA’s HIPAA-based crime-tort exception or similarly worded privacy statutes – a position that may prove critical as courts continue to confront the growing wave of adtech privacy class actions.

Illinois Federal Court Denies Certification Of Deceptive Advertising Class Where Named Plaintiff Knew The Truth But Continued Purchasing The Product

By Gerald L. Maatman, Jr., Jennifer A. Riley, and Hayley Ryan

Duane Morris Takeaways: On February 20, 2026, in Clark v. Blue Diamond Growers, Case No. 22-CV-01591, 2026 WL 483275 (N.D. Ill. Feb. 20, 2026), Judge Jorge L. Alonso of the U.S. District Court for the Northern District of Illinois denied class certification in a deceptive advertising lawsuit brought under the Illinois Consumer Fraud and Deceptive Business Practices Act (“ICFA”). The Court concluded that the named plaintiff was not an adequate class representative because she knew the allegedly misleading representation was false yet continued purchasing the product. Because that knowledge defeated proximate causation and created a unique defense, the Court determined that class certification was improper.

This decision is a reminder that plaintiffs asserting deceptive advertising claims must show they were actually deceived.  Where a named plaintiff knew the truth and continued to buy the product anyway, adequacy under Rule 23(a)(4) is vulnerable.

Background

Plaintiff Margo Clark filed a putative class action complaint against Blue Diamond Growers, a cooperative of California almond growers that sells flavored almonds, including “Smokehouse® Almonds.” Id. at *1. She alleged that the “Smokehouse®” label misled consumers into believing the almonds were smoked in a smokehouse, when in fact the smoky flavor derived from added seasoning. Id. According to Plaintiff’s Complaint, this purported misrepresentation enabled Blue Diamond to charge a price premium in violation of the ICFA. Id.

Plaintiff moved to certify a class of Illinois purchasers of Smokehouse® Almonds from March 2019 to the present. Id.

The Court’s Ruling

Judge Alonso denied certification based on a failure to establish adequacy of representation. Id. at *2. Under Federal Rule of Civil Procedure 23(a)(4), a class may be certified only if “the representative parties will fairly and adequately protect the interests of the class.” Where the named plaintiff is subject to an arguable unique defense, however, adequacy is lacking. Id. at *1. 

Here, the dispositive issue was proximate causation under the ICFA. To prevail on a deceptive advertising claim under the ICFA, a plaintiff must establish that the alleged deception proximately caused her injury, i.e., that she was actually deceived. Id. at *2. A plaintiff who knows the truth cannot establish proximate cause because she was not misled. Id.

At her deposition, Plaintiff testified that she learned as early as 2019 or 2020, after viewing a Facebook advertisement from her counsel, that the almonds were seasoned rather than smoked. Id. Despite that knowledge, she continued to purchase the product for over a year. Id.  The Court found this testimony fatal, holding that Plaintiff was “inadequate to serve as the class representative because she cannot show proximate causation as required to prevail on her claim.” Id.

Plaintiff’s counsel attempted to rehabilitate the claim through a declaration asserting that the Facebook advertisements were not targeted to Illinois consumers in 2019 or 2020. Id. However, counsel also acknowledged in the same declaration that Plaintiff submitted her information in response to the advertisement approximately one year before signing her representation agreement in March 2022.  Id. The Court concluded that this timeline did not resolve the proximate cause problem. Even accepting counsel’s version, Plaintiff “saw the advertisement around March 2021, yet she still continued to purchase almonds for another year.” Id.

Plaintiff’s counsel also relied on Plaintiff’s amended interrogatory responses in which she claimed she first learned the almonds were not smoked during a conversation with her attorney after signing the representation agreement. Id. at *3. Based on that revision, Plaintiff’s counsel argued that Plaintiff could establish proximate causation because she stopped purchasing the almonds after she signed the representation agreement. Id.

The Court was unpersuaded. Weighing the deposition testimony, the declaration, and Plaintiff’s original interrogatory responses, the Court concluded that Blue Diamond’s proximate cause defense was at least arguable – and that was sufficient. Id. The Court emphasized that a unique defense need only be “arguable” to defeat adequacy, and here it was “certainly arguable.” Id.

Accordingly, the Court denied certification and directed the parties to submit a joint status report addressing how they intend to proceed on Plaintiff’s individual claims and whether they have considered settlement discussions in light of the Court’s certification ruling. Id.

Implications for Companies

Clark reinforces a core Rule 23 principle that a named plaintiff subject to a unique defense cannot adequately represent a class. In deceptive advertising cases under the ICFA and similar statutes, knowledge is often outcome-determinative. If a plaintiff knew of the alleged defect before purchasing, or continued purchasing after learning the truth, proximate causation becomes vulnerable.

For companies defending consumer fraud class actions, deposition testimony, purchase history, and discovery into when and how the plaintiff allegedly learned of the “defect” or deception may provide a powerful adequacy challenge. As Clark illustrates, even an “arguable” unique defense can be enough to defeat class certification.

Illinois State Court Grants Certification Of BIPA Class Comprised Of Customers Who Used Apple’s Siri Function

By Gerald L. Maatman, Jr., Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: In Zaluda et al. v. Apple, Inc., Case No. 2019 CH 11771 (Cir. Ct. Cook Cnty., Ill. Jan. 29, 2026), Judge Michael T. Mullen of the Circuit Court of Cook County, Illinois, granted class certification to a class of plaintiffs alleging that Apple’s Siri function violated the Illinois Biometric Information Privacy Act (“BIPA”). In doing so, Judge Mullen delivered a significant setback to Apple’s efforts to block certification of a putative class that could number in the millions. Pre-certification discovery established that there were approximately 2.6 to 3.9 million Siri users in Illinois during the relevant class period.

This decision represents the latest success for the plaintiffs’ bar in a string of victories in Illinois privacy class actions (as we previously blogged about here and here) and underscores that even the largest and most sophisticated companies in the world face substantial legal exposure arising from their biometric data collection, retention, and use practices.

Background

Apple’s voice-activated digital assistant, “Siri,” uses speech recognition technology to understand and respond to user inquiries and to perform user-requested tasks. Siri comes pre-loaded on a wide range of Apple devices, including iPhones, iPads, HomePods, Apple Watches, MacBooks, iMacs, and AirPods.

Siri relies on an automatic speech recognition (“ASR”) process that “automatically and uniformly computes biometric feature vectors [] from every user utterance for every Siri user,” and that process functions uniformly across all Apple devices. Id. at 4. These “feature vectors” are capable of being used to identify a speaker. Id. at 3. During the relevant class period, Apple’s privacy policies and disclosures applicable to Siri users were uniform and did not include the notice, consent, or retention policy disclosures required by the BIPA. Id. at 4.

Apple sorts its records to identify device users based on their state of residence or telephone number area code. Id. at 5. Apple’s former Senior Director of Siri testified at his deposition that Apple tracks the percentage of device owners who enable Siri and that approximately 20% to 30% of all device owners do so. Based on those figures, Apple estimated that there were approximately 2.6 to 3.9 million Siri users in Illinois during the relevant period at issue in the lawsuit. Id. at 3, 5.

Against this backdrop, plaintiffs filed a class action lawsuit alleging that Apple violated the BIPA by collecting, capturing, storing and/or disseminating “biometric feature vectors” and/or “voiceprints” of millions of Illinois residents who used Siri on any Apple device without first providing the required disclosures, obtaining informed written consent, or maintaining publicly available written data retention and destruction guidelines. Id. at 2. Plaintiffs sought certification of a class consisting of all Illinois residents who used Siri on any Apple device on or after September 19, 2014. Id. at 5. Notably, pre-certification discovery revealed that there were more than 13 million unique Apple IDs associated with a billing address in Illinois and an Apple device capable of running Siri. Id. at 5 n.20.

The Court’s Ruling

In ruling in favor of the plaintiffs, Judge Mullen systematically rejected Apple’s arguments that plaintiffs failed to satisfy the requirements for class certification under 735 ILCS 5/2-801. Given the size of the purported class, Apple stipulated to numerosity for purposes of class certification. Id. at 8.

With respect to the adequacy requirement, Apple argued that the named plaintiffs were inadequate representatives because they lacked sufficient knowledge about the case and because three of them no longer reside in Illinois. Id. at 18. The Court rejected those arguments. After reviewing the named plaintiffs’ deposition testimony, the Court found that each plaintiff demonstrated a basic understanding of the claims and emphasized that class representatives are not “required to be experts.”  Id. The Court further concluded that each named plaintiff was an Illinois resident at some point during the proposed class period and that there was no evidence of any conflict between the interests of any named plaintiff and the interests of absent class members. Id.

The Court also found that common questions of law and fact predominated over any questions affecting individual members, and that a class action was an appropriate method for adjudicating the claims.  Id. at 17, 22. Apple argued that commonality and predominance were lacking because: (1) Siri is optional and not all Apple device users enable it; (2) Siri users do not all activate Siri in precisely the same manner; and (3) Siri’s speech recognition functions changed during the class period. Id. The Court rejected each contention.

First, the Court explained that users who never enabled Siri are not members of the proposed class, rendering that argument irrelevant. Id. Second, the Court concluded that regardless of how Siri is activated, Plaintiffs plausibly alleged that Siri’s ASR process uniformly generates feature vectors that are capable of identifying a speaker from all user utterances. Id. The Court further reasoned that the optional Siri features cited by Apple do not undermine plaintiffs’ claims based on Siri’s ASR process and, at most, could give rise to additional BIPA claims for users who opted in to those features. Id. at 11-12. Third, the Court found that alleged changes to Siri’s speech recognition functions during the class period did not alter the uniform operation of the ASR process and therefore did not defeat commonality or predominance. Id. at 12.

Apple also contended that class membership could only be established through “individualized” proof, which it argued defeated certification. Id. at 14. The Court disagreed. Citing Svoboda v. Amazon.com, Inc., 2024 WL 1363718, at *10 (N.D. Ill. Mar. 30, 2024) (which we previously blogged about here), the Court held that issues concerning how class members are identified are matters of class management, not class certification. Id. at 16. The Court explained that, if liability is established, class members could submit affidavits attesting to their Siri use in Illinois, which could then be cross-checked against Apple IDs, home addresses, IP addresses, and geolocation data. Id.

Finally, the Court concluded that proceeding on a class basis was the most efficient and fair method of adjudication. Id. at 22. The Court noted that Apple’s implicit alternative (i.e., requiring millions of individual BIPA lawsuits by Illinois Siri users) would impose a severe burden on the judicial system. Id. at 21.

Implications for Companies

This decision serves as a reminder of the significant risks associated with collecting or retaining biometric information without BIPA-compliant policies and practices. As Zaluda illustrates, the larger the company, the larger the potential class size (and the greater exposure to statutory damages). Although the ultimate size of the certified class remains to be determined, it is likely to number in the millions. Companies of all sizes should view this ruling as a wake-up call regarding the substantial liability that can result from noncompliance with Illinois’ biometric privacy laws.

Executive Order Signals A Push Toward A Single, Federal “AI Rulebook” And A Retreat From The State Patchwork

By Gerald L. Maatman, Jr., Justin R. Donoho, and Hayley Ryan

Duane Morris Takeaways:  On December 11, 2025, President Donald J. Trump signed Executive Order 14365 titled “Ensuring a National Policy Framework for Artificial Intelligence.” The Order targets what it characterizes as a “patchwork” of State-by-State AI regulation and directs federal agencies to pursue a more uniform, national framework. Rather than serving as a technical AI governance roadmap, the Order focuses on limiting State AI laws through federal funding leverage, potential preemption, and expanded use of FTC enforcement authority. The discussion below highlights the Order’s core objectives and key implications for companies and employers. The Executive Order is required reading for any organization deploying AI or considering doing so.

The Executive Order’s Core Objectives

Reduce State AI Regulation By Framing It As A Competitiveness Problem

The Order emphasizes U.S. leadership in artificial intelligence and asserts that divergent State regulatory regimes increase compliance costs, especially for startups, and may impede innovation and deployment. It also raises concerns that certain State approaches could pressure companies to embed “ideological” requirements into AI systems.

Create Leverage Through Federal Funding: BEAD Broadband Money As The “Carrot And Stick”

Within 90 days, the Secretary of Commerce is directed to issue a policy notice describing the circumstances under which States may be deemed ineligible for certain broadband deployment funding under the Broadband Equity, Access, and Deployment (BEAD) program if they impose specified AI-related requirements. The notice is also intended to explain how fragmented State AI laws could undermine broadband deployment and high-speed connectivity goals.

Move Toward A Federal Reporting And Disclosure Standard

Within 90 days after the Order’s State-law “identification” process (discussed below), the Federal Communications Commission (FCC), in consultation with a Special Advisor for AI and Crypto, is instructed to consider whether to initiate a proceeding to adopt a federal reporting and disclosure standard for AI models that would preempt conflicting State requirements.

Use The FTC Act As An Enforcement Anchor And Tee Up Preemption Arguments

Within 90 days, the Federal Trade Commission (FTC) is directed, in consultation with other federal agencies, to issue a policy statement addressing how the FTC Act’s prohibition on unfair or deceptive acts or practices applies to AI models, with the express objective of preempting conflicting State laws.

Establish A Federal AI Litigation Task Force To Challenge State AI Laws

The Executive Order goes beyond policy statements and funding leverage by directing the Attorney General, within 30 days, to establish an AI Litigation Task Force dedicated exclusively to challenging State AI laws that conflict with the Order’s national policy objectives. The Task Force is authorized to pursue constitutional and preemption-based challenges, signaling an intent to bring coordinated, affirmative litigation against State AI regimes.

That enforcement effort is reinforced by a parallel State-law triage process. Within 90 days, the Secretary of Commerce must publish an evaluation identifying “onerous” State AI laws for potential challenge, particularly those that require AI systems to alter truthful outputs or compel disclosures that may implicate First Amendment or other constitutional concerns. Together, these provisions signal an intent to move quickly from policy articulation to test cases aimed at curbing State-level AI regulation.

Implications For Companies

Compliance Strategy May Shift, But Uncertainty Rises First

Although companies may welcome relief from conflicting State AI mandates, the Executive Order is likely to increase near-term uncertainty. Preemption disputes are likely, and the Order directs agency action rather than establishing a comprehensive statutory framework. Companies should avoid scaling back State-law compliance prematurely and should assume any federal override will be contested until resolved through rulemaking and litigation.

Class Action Exposure Will Shift, Not Disappear

Even if State AI laws are narrowed, plaintiffs’ lawyers are likely to pursue claims under more traditional theories, including consumer protection (particularly AI marketing and disclosure claims), employment discrimination, privacy and biometrics statutes, and contract or misrepresentation theories. The Order’s emphasis on FTC unfair and deceptive practices enforcement suggests that federal consumer protection standards may become the new focal point for both regulatory scrutiny and follow-on civil litigation.

Employment Risk Remains

Employers should expect ongoing scrutiny of AI use in hiring, promotion, and performance management, including disparate impact claims, vendor-liability arguments, and discovery disputes over model documentation, adverse impact analyses, and validation. Defensible governance, testing, and documentation remain critical.

Federal Contracting And Funding May Come With New AI Representations

If federal agencies adopt standardized AI disclosures, companies operating in regulated industries or participating in broadband initiatives may face new contract provisions governing AI use, along with enhanced reporting and audit obligations.

What Companies Should Do Now

Companies should begin by identifying where and how AI tools are being deployed, particularly in consumer-facing and employment-related contexts, and evaluating those uses under existing disclosure, privacy, and anti-discrimination laws. Public-facing statements about AI capabilities should be reviewed to ensure they are accurate and defensible, as increased regulatory and litigation focus on unfair or deceptive practices is likely to heighten scrutiny of AI-related claims. Companies should also review vendor relationships to confirm that contracts clearly address testing and validation obligations, incident response, audit rights, and appropriate allocation of risk for privacy and discrimination claims. Finally, organizations should remain prepared for continued regulatory change by maintaining State-law compliance readiness while monitoring federal agency actions that may shape a national AI framework.

Bottom Line

This Executive Order is a significant policy signal. The federal government is positioning itself to reduce State-by-State AI regulation and replace it with a framework centered on federal disclosure requirements and consumer protection enforcement. Companies should view the Order as an opportunity to prepare for a likely federal compliance baseline, without assuming State-law exposure will disappear in the near term.

Illinois Supreme Court Imposes Stricter Standing Test For “No-Injury” Class Actions Premised On Statutory Violations

By Gerald L. Maatman, Jr., Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  In Fausett v. Walgreen Co., 2025 IL 131444 (Nov. 20, 2025), the Illinois Supreme Court narrowly construed the private right of action set forth in the federal Fair Credit Reporting Act (FCRA), holding that because the FCRA does not explicitly authorize consumers to sue for violations, the law does not authorize individual lawsuits unless a consumer shows that a violation caused a concrete injury. Thus, at least for FCRA actions, a plaintiff must now allege a “concrete injury” in Illinois state courts similar to what a plaintiff must allege to establish Article III standing in federal courts. This is a significant development, as Illinois courts have not previously required “concrete-injury” allegations for statutory claims under the state’s more liberal standing test.

Fausett is therefore a must-read opinion that represents an obstacle for future plaintiffs pursuing “no-injury” claims premised on the FCRA, in addition to other federal statutes containing similar private rights of action.

Case Background

Plaintiff alleged that Defendant violated the Fair and Accurate Credit Transactions Act (FACTA) – a provision of the FCRA – by printing a receipt containing more than the last five digits of her debit card number. Plaintiff sought statutory damages for the alleged FACTA violation, though she did not claim the violation led to actual harm by, for example, a third party using the receipt to steal her identity.

Plaintiff moved to certify a class of individuals for whom Defendant printed receipts containing more than the last five digits of their payment card numbers. In granting class certification, the trial court rejected Defendant’s argument that Plaintiff had no viable claim due to lack of standing. The trial court reasoned that Illinois courts are not bound by the same jurisdictional restrictions applicable to federal courts and that the Illinois Supreme Court’s decision in Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186, established that “a violation of one’s rights afforded by a statute is itself sufficient for standing.” Fausett, 2025 IL 131444, ¶ 15. The Illinois Appellate Court affirmed the trial court’s class certification order, and Defendant subsequently appealed to the Illinois Supreme Court.

The Illinois Supreme Court’s Decision

The issue before the Illinois Supreme Court was whether standing existed in Illinois courts for a plaintiff alleging a FACTA violation that did not result in actual harm.

The Court began by distinguishing the standing doctrines applied in Illinois state courts vs. federal courts. The Court observed that Illinois courts are not bound by federal standing law and that Illinois standing principles apply to all claims pending in state court – even those premised on federal statutes.

The Court then identified the two different types of standing that exist in Illinois courts, including: (1) common-law standing, which – like Article III – requires an injury in fact to a legally recognized interest; and (2) statutory standing, which requires the fulfillment of statutory conditions to sue for legislatively created relief. See id. ¶ 39 (for statutory standing, the legislature creates a right of action and determines “who shall sue, and the conditions under which the suit may be brought”) (citation omitted). The Court further noted that a statutory violation, without actual harm, can establish statutory standing only where the statute specifically authorizes a private lawsuit for violations.

Turning to Plaintiff’s FACTA lawsuit, the Court determined that Plaintiff’s claim could not invoke statutory standing because the FCRA’s liability provisions “fail to include standing language. In other words, Congress did not expressly define the parties who have the right to sue for the statutory damages established in FCRA.” Id. ¶ 40; see also id. ¶ 44 (“the plain and unambiguous language” of the FCRA “does not state the consumer or an aggrieved person may file the cause of action”). Thus, because the FCRA is “silent as to who may bring the cause of action for damages,” Plaintiff’s FACTA claim “does not implicate statutory standing principles, and thus common-law standing applies to plaintiff’s suit.” Id.

As for common law standing, the Court concluded that Plaintiff’s claim did not satisfy Illinois’s common law standing test, under which an alleged injury, “whether actual or threatened, must be: (1) distinct and palpable; (2) fairly traceable to the defendant’s actions; and (3) substantially likely to be prevented or redressed by the grant of the requested relief.” Id. ¶ 39 (quoting Petta v. Christie Business Holdings Co., P.C., 2025 IL 130337, ¶ 18). The injury alleged must also be concrete – meaning that a plaintiff alleging only a purely speculative future injury lacks a sufficient interest to have standing.

The Court held that Plaintiff failed to allege or prove a concrete injury because she conceded that she was unaware of any harm to her credit or identity caused by the alleged FACTA violation, and she could not identify anyone who had even seen her receipts “beyond the cashier, herself, and her attorneys.” See id. ¶ 48. Thus, Plaintiff could only show an increased risk of identity theft – something the Court has found to be insufficient to confer standing for a complaint seeking money damages. Because Plaintiff lacked a viable claim due to lack of standing, the Court held that the trial court abused its discretion in granting Plaintiff’s motion for class certification.

Implications Of The Fausett Decision

Fausett will impact FCRA class actions in a significant manner by precluding plaintiffs from bringing certain “no-injury” class actions in Illinois state courts. Federal courts have regularly dismissed such claims for lack of Article III standing based on the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, 578 U.S. 330 (2016).

Fausett now forecloses plaintiffs from refiling the same claims in Illinois state courts, leaving plaintiffs without a venue to prosecute no-injury FCRA claims in Illinois. Importantly, the Fausett decision will likely reach beyond the FCRA context, as other federal consumer-protection statutes contain liability provisions with private-right-of-action language similar to the language found in the FCRA.

Third Circuit Affirms Dismissal Of CIPA Adtech Class Action Because A Party To A Communication Cannot Eavesdrop On Itself

By Gerald L. Maatman, Jr., Justin R. Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On November 13, 2025, in Cole, et al. v. Quest Diagnostics, Inc., 2025 U.S. App. LEXIS 29698 (3d Cir. Nov. 13, 2025), the U.S. Court of Appeals for the Third Circuit affirmed a ruling of the U.S. District Court for the District of New Jersey dismissing a class action complaint brought by website users against a diagnostic testing company alleging that the company’s use of website advertising technology violated the California Invasion of Privacy Act (“CIPA”) and California’s Confidentiality of Medical Information Act (“CMIA”).

The ruling is significant because it confirms two important principles: (1) CIPA’s prohibition against eavesdropping does not apply to an online advertising company, like Facebook, when it directly receives information from the users’ browser; and (2) the CMIA is not triggered unless plaintiffs plausibly allege the disclosure of substantive medical information.

Background

This case is one of a legion of nationwide class actions that plaintiffs have filed alleging that third-party technologies (“adtech”) captured user information for targeted advertising. These tools, such as the Facebook Tracking Pixel, are widely used across millions of consumer products and websites.

In these cases, plaintiffs typically assert claims under federal or state eavesdropping statutes, consumer protection laws, or other privacy statutes. Because statutes like CIPA allow $5,000 in statutory damages per violation, plaintiffs frequently seek millions, or even billions, in potential recovery, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $5,000 per claimant, equals a huge amount of damages. While many of these suits initially targeted healthcare providers, plaintiffs have sued companies across nearly every industry, including retailers, consumer products companies, universities, and the adtech companies themselves.

Several of these cases have resulted in multimillion-dollar settlements; others have been dismissed at the pleading stage (as we blogged about here) or at the summary judgment stage (as we blogged about here and here). Still, most remain undecided, and with some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.

In Cole, the plaintiffs alleged that the defendant diagnostic testing company used the Facebook Tracking Pixel on both its general website and its password-protected patient portal.  Id. at *1-2.  According to the plaintiffs, when a user accessed the general website, the Pixel intercepted and transmitted to Facebook “the URL of the page requested, along with the title of the page, keywords associated with the page, and a description of the page.” Id. at *2-3. Likewise, when a user accessed the password-protected website, the Pixel allegedly transmitted the URL “showing, at a minimum, that a patient has received and is accessing test results.” Id. at *3.

Plaintiffs asserted that these transmissions constituted (1) a CIPA violation because the company supposedly aided Facebook in “intercepting” plaintiffs’ internet communications, and (2) a CMIA violation because the company allegedly disclosed URLs associated with webpages plaintiffs accessed to view test results along with plaintiffs’ identifying information linked to users’ Facebook accounts. Id. at *3.

The company moved to dismiss, and, in separate orders, the district court dismissed both claims. See 2024 U.S. Dist. LEXIS 116350; 2025 U.S. Dist. LEXIS 7205.

As to the CIPA claim, the district court found that CIPA “is aimed only at ‘eavesdropping, or the secret monitoring of conversations by third parties,’” and that Facebook was not a third party because it received information directly from plaintiffs’ browsers about webpages they visited. 2025 U.S. Dist. LEXIS 7205, at *7-8 (quoting In Re Google Inc. Cookie Placement Consumer Privacy Litig., 806 F.3d 125, 140-41 (3d Cir. 2015)).  As to the CMIA claim, the district court found that plaintiffs alleged only that the company disclosed that a patient accessed test results but not what kind of medical test was done or what the results were. 2024 U.S. Dist. LEXIS 116350, at *15. Accordingly, the district court held that plaintiffs failed to allege the disclosure of “substantive” medical information as required under the CMIA. Id.

Plaintiffs appealed both rulings.

The Court’s Decision

The Third Circuit affirmed. Id. at *1.

On the CIPA claim, the Third Circuit explained that “[a]s a recipient of a direct communication from Plaintiffs’ browsers, Facebook was a participant in Plaintiffs’ transmissions such that [the company] did not aid or assist Facebook in eavesdropping on or intercepting such communications, even if done without the users’ knowledge.” 2025 U.S. App. LEXIS 29698, at *6.  With no eavesdropping, “Plaintiffs’ CIPA claim was properly dismissed.” Id. at *7.

On the CMIA claim, the Third Circuit explained that “at most, Plaintiffs alleged that [the company] disclosed Plaintiffs had been its patients, which is not medical information protected by CMIA.” Id. at *8. Thus, the Third Circuit held that the district court properly dismissed the CMIA claim. Id. at *9.

Implications For Companies

Cole offers strong precedent for any company defending adtech class action claims (1) brought under CIPA’s eavesdropping provision where the third-party adtech company directly receives the information from users’ browsers and (2) brought under the CMIA where the alleged disclosure merely shows that a person was a patient, without revealing any substantive information about the person’s medical condition or test results.

The latter point continues to appear across adtech class actions.  Just as the plaintiffs in Cole failed to plausibly allege the disclosure of substantive medical information,  courts have dismissed similar claims where plaintiffs allege disclosure of protected health information (“PHI”) without actually identifying what PHI was supposedly shared (as we blogged about here).  These decisions reinforce that adtech plaintiffs must identify the specific medical information allegedly disclosed to plausibly plead claims under the CMIA or for invasion of privacy.

California Federal Court Dismisses Adtech Class Action For Failure To Specify Highly Offensive Invasion Of Privacy

By Gerald L. Maatman, Jr., Justin R. Donoho, Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  On October 30, 2025, in DellaSalla, et al. v. Samba TV, Inc., 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025), Judge Jacqueline Scott Corley of the U.S. District Court for the Northern District of California dismissed a complaint brought by TV viewers against a TV technology company alleging that the company, through the advertising technology it provided in the plaintiffs’ smart TVs, committed the common law tort of invasion of privacy and violated the Video Privacy Protection Act (“VPPA”), the California Invasion of Privacy Act (“CIPA”), and California’s Comprehensive Computer Data Access and Fraud Act (“CDAFA”).  The ruling is significant as it shows that in the hundreds of adtech class actions across the nation alleging that adtech violates privacy laws, plaintiffs do not plausibly state a common law claim for invasion of privacy unless they specify in the complaint the information allegedly disclosed and explain how such a disclosure was highly offensive.  The case is also significant in that it shows that the VPPA does not apply to video analytics companies, and that California privacy statutes do not apply extraterritorially to plaintiffs located outside California.

Background

This case is one of a legion of class actions that plaintiffs have filed nationwide alleging that third-party technology captured plaintiffs’ information and used it to facilitate targeted advertising. 

This software, often called advertising technologies or “adtech,” is a common feature of millions of consumer products and websites in operation today.  In adtech class actions, the key issue is often a claim brought under a federal or state wiretap act, a consumer fraud act, or the VPPA, because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $2,500 per claimant in statutory damages under the VPPA, for example, equals a huge amount of damages.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they have filed suits against companies that span nearly every industry including retailers, consumer products, universities, and the adtech companies themselves.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided. 

In DellaSalla, the plaintiffs brought suit against a TV technology company that embedded a chip with analytics software in plaintiffs’ smart TVs.  Id. at *1, 5.  According to the plaintiffs, the company intercepted the plaintiffs’ “private video-viewing data in real time, including what [t]he[y] watched on cable television and streaming services,” and tied this information to each plaintiff’s unique anonymized identifier in order to “facilitate targeted advertising,” all allegedly without the plaintiffs’ consent.  Id. at *1.  Based on these allegations, the plaintiffs claimed that the TV technology company violated the CIPA, CDAFA, and VPPA, and committed the common-law tort of invasion of privacy. 

The company moved to dismiss, arguing that the CIPA and CDAFA did not apply because the plaintiffs were located outside California, that the VPPA did not apply because the TV technology company was not a “video tape service provider,” and that the plaintiffs failed to plausibly allege a highly offensive violation of a privacy interest.

The Court’s Decision

The Court agreed with the TV technology company and dismissed the complaint in its entirety, with leave to amend any existing claims but not to add any additional claims without further leave.

On the CIPA and CDAFA claims, the Court found that the plaintiffs did not allege that any unlawful conduct occurred in California.  Instead, the plaintiffs alleged that the challenged conduct occurred in their home states of North Carolina and Oklahoma.  Id. at *1, 3-4.  For these reasons, the Court dismissed the CIPA and CDAFA claims, finding that these statutes do not apply extraterritorially.  Id.

On the VPPA claim, the Court addressed the VPPA’s definition of  “video tape service provider,” which is “any person, engaged in the business … of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.”  Id. at *5.  The plaintiffs argued that the TV technology company was a video tape service provider “because its technology is incorporated in Smart TVs, which deliver prerecorded videos.  [The defendant] advertises its technology precisely as providing a ‘better viewing experience’ ‘immersive on-screen experiences’ and a ‘more tailored ad experience’ through its technology.”  Id.  The Court rejected this argument. It held that “[t]his allegation does not plausibly support an inference, [the defendant]—an analytics software provider—facilitated the exchange of a video product. Rather, the allegations support an inference [the defendant] collected information about Plaintiffs’ use of a video product, but not that it provided the product itself.”  Id. (emphasis added).

On the common law claim for invasion of privacy, the TV technology company argued that this claim failed because the plaintiffs “have no expectation of privacy in the information it collects and Plaintiffs have not alleged a highly offensive intrusion.”  In examining this argument, the Court noted that the plaintiffs had only provided “vague references” to the information supposedly intercepted.  Id. at *4.  This information included video-viewing data generally (none specified) tied to an anonymized identifier.  Id. at *1, 5.  Thus, the Court agreed with the defendant’s argument and found that plaintiffs identified “no embarrassing, invasive, or otherwise private information collected” and no explanation of how the tracking of video viewing history with an anonymized ID caused plaintiffs “to experience any kind of harm that is remotely similar to the ‘highly offensive’ inferences or disclosures that were actionable at common law.”  Id. at *5.  In sum, the Court concluded that “Plaintiffs have not plausibly alleged a highly offensive violation of a privacy interest.”

Implications For Companies

DellaSalla provides powerful precedent for any company opposing adtech class action claims (1) brought under statutes enacted in states other than the plaintiffs’ place of residence; (2) brought under the federal VPPA where the company allegedly transmitted video usage information, as opposed to any videos themselves; and (3) alleging common-law invasion of privacy, where the plaintiffs have not specified the information disclosed and why such a disclosure is highly offensive.

The last point is a recurring theme in adtech class actions.  Just as the plaintiffs suing the TV technology company here did not plausibly state a common-law claim for invasion of privacy without identifying the videos watched and any highly offensive harm in associating those videos with an anonymized ID, so too did a plaintiff fail to plausibly state a claim for invasion of privacy premised on adtech’s disclosure of protected health information (“PHI”) without specifying the PHI allegedly disclosed (as we blogged about here).  These cases show that for adtech plaintiffs to plausibly plead claims for invasion of privacy, they at least need to identify what allegedly private information was disclosed and explain how the alleged disclosure was highly offensive.

New York Federal Court’s OpenAI Discovery Orders Provide Key Insights For Companies Navigating AI Preservation Standards

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: In a series of discovery rulings in the case of In Re OpenAI, Inc. Copyright Infringement Litigation, No. 23 Civ. 11195 (S.D.N.Y.), Magistrate Judge Ona T. Wang issued orders that signal how courts are likely to approach AI data, privacy, and discovery obligations. Judge Wang’s orders illustrate the growing tension between AI system transparency and data privacy compliance – and how courts are trying to balance them.

For companies that develop or use AI, these rulings highlight both the risk of expansive preservation demands and the opportunity to share proportional, privacy-conscious discovery frameworks. Below is an overview of these decisions and the takeaways for in-house counsel, privacy officers, and litigation teams.

Background

In May 2025, the U.S. District Court for the Southern District of New York issued a preservation order in a copyright action challenging the use of The New York Times’ content to train large language models. The order required OpenAI to preserve and segregate certain output log data that would otherwise be deleted. Days later, the Court denied OpenAI’s motion to reconsider or narrow that directive. By October 2025, however, the Court approved a negotiated modification that terminated OpenAI’s ongoing preservation obligations while requiring continued retention of the already-segregated data.

The Court’s Core Rulings

  1. Forward-Looking Preservation Now, Arguments Later

On May 13, 2025, the Court entered an order requiring OpenAI to preserve and segregate output log data that would otherwise be deleted, including data subject to user deletion requests or statutory erasure rights. See id., ECF No. 551. The rationale: once litigation begins, even transient data can be critical to issues like bias and representativeness. The Court stressed that it was too early to weigh proportionality, so preservation would continue until a fuller record emerged.

  2. Reconsideration Denied, Preservation Continues

A few days later, when OpenAI sought reconsideration or modification of the preservation order, the Court denied the request without prejudice. Id., ECF No. 559. The Court noted that it was premature to decide proportionality and potential sampling bias until additional information was developed.

  3. A Negotiated “Sunset” and Privacy Carve-Outs

By October 2025, the parties agreed to wind down the broad preservation obligation. On October 9, 2025, the Court approved a stipulated modification that ended OpenAI’s ongoing preservation duty as of September 26, 2025, limited retention to already-segregated logs, excluded requests originating from the European Economic Area, Switzerland, and the United Kingdom for privacy compliance, and added targeted, domain-based preservation for select accounts listed in an appendix. Id., ECF No. 922.

This evolution — from blanket to targeted, time-limited preservation — shows courts’ willingness to adapt when parties document technical feasibility, privacy conflicts, and litigation need.

Implications For Companies

  1. Evidence vs. Privacy: Courts Expect You to Reconcile Both

These rulings show that courts will not accept “privacy law conflicts” as a stand-alone excuse to delete potentially relevant data. Instead, companies must show they can segregate, anonymize, or retain data while maintaining compliance. The OpenAI orders make clear: when evidence may be lost, segregation beats destruction.

  2. Proportionality Still Matters

Even as courts push for preservation, they remain attentive to proportionality. While early preservation orders may seem sweeping, judges are open to refining them once the factual record matures. Companies that track the cost, burden, and privacy impact of compliance will be best positioned to negotiate tailored limits.

  3. Preservation Is Not Forever

The October 2025 stipulation illustrates how to exit an indefinite obligation: offer targeted cohorts, geographic exclusions, and sunset provisions supported by a concrete record. Courts will listen if you bring data, not just arguments.

A Playbook for In-House Counsel

  1. Map Your AI Data Universe

Inventory all AI-related data exhaust: prompts, outputs, embeddings, telemetry, and retention settings. Identify controllers, processors, and jurisdictions.

  2. Build “Pause” Controls

Design systems capable of segregating or pausing deletion by user, region, or product line. This technical agility is key when a preservation order issues.

  3. Update Litigation Hold Templates for AI

Traditional holds miss ephemeral or system-generated data. Draft holds that instruct teams how to pause automated deletion while complying with privacy statutes.

  4. Propose Targeted Solutions

When facing broad discovery demands, offer alternatives: limit by time window, geography, or user cohort. Courts will accept reasonable, well-documented compromises.

  5. Build Toward an Off-Ramp

Preservation obligations can sunset — but only if supported by metrics. Track preserved volumes, costs, and privacy burdens to justify targeted, defensible limits.

Conclusion

The OpenAI orders reflect a new judicial mindset: preserve broadly first, negotiate smartly later. AI developers and data-driven businesses should expect similar directives in future litigation. Those that engineer for preservation flexibility, document privacy compliance, and proactively negotiate scope will avoid the steep costs of one-size-fits-all discovery — and may even help set the industry standard for balanced AI litigation governance.
