Data Security And Privacy Liability – Takeaways From The Sedona Conference Working Group 11 Midyear Meeting In Ft. Lauderdale

By Justin R. Donoho

Duane Morris Takeaways: Data privacy and data breach class action litigation continue to explode.  At the Sedona Conference Working Group 11 on Data Security and Privacy Liability, held in Fort Lauderdale, Florida, on November 6-7, 2025, Justin Donoho of the Duane Morris Class Action Defense Group served as a moderator for a panel discussion, “Legislative Drafting Considerations: Lessons from Colorado’s Privacy and AI Law Intersection.”  The working group meeting, which spanned two days and drew over 40 participants, produced excellent dialogues on this topic and others, including website advertising technologies, judicial perspectives on privacy and data breach litigation, onward transfer of consumer PII in M&A and bankruptcy contexts, venue, forum, and choice of law in privacy and data breach class actions, a privacy and data security regulator roundtable, revisiting notice and consent for facial recognition, and the application of attorney-client privilege in the cybersecurity context.

The Conference’s robust agenda featured dialogue leaders from a wide array of backgrounds, including government officials, industry experts, federal and state judges, in-house attorneys, cyber and data privacy law professors, plaintiffs’ attorneys, and defense attorneys.  In a masterful way, the agenda provided valuable insights for participants toward this working group’s mission, which is to identify and comment on trends in data security and privacy law, in an effort to help organizations prepare for and respond to data breaches, and to assist attorneys and judicial officers in resolving questions of legal liability and damages.

Justin had the privilege of speaking about lessons from the intersection of the Colorado Privacy Act (CPA) and Colorado AI Act (CAIA) and how these lessons might guide future legislatures when drafting AI and data privacy statutes.  Highlights from his presentation included lessons learned from the intersection of the CPA and CAIA and, among them, the human steps a company may take when using an AI hiring tool to avoid triggering the CPA’s opt-out right in factual scenarios where that right might apply, as those steps are discussed in his article, “Five Human Best Practices to Mitigate the Risk of AI Hiring Tool Noncompliance with Antidiscrimination Statutes,” Journal of Robotics, Artificial Intelligence & Law, Volume 8, No. 4, July-August 2025.

Finally, one of the greatest joys of participating in Sedona Conference meetings is the opportunity to draw on the wisdom of fellow presenters and other participants from around the globe.  Highlights included:

  1. Experts of all stripes presenting a draft opus on advertising technologies that describes ways our laws could move beyond outdated statutes with draconian statutory penalties by focusing instead on any actual harms resulting from such technologies.
  2. A lively dialogue among the panelists and other participants dissecting the Colorado Privacy Act, Colorado AI Act, and those statutes’ application to AI hiring tools in an effort to offer guidance to future legislators drafting similar statutes.
  3. Federal and state judges offering tips for advocacy when presenting technical cybersecurity and data privacy issues to the court.
  4. Panelists with different backgrounds discussing the law regarding when a company that has obtained personal data with consent can and cannot transfer the data in M&A and bankruptcy contexts.
  5. Litigators from both sides of the “v.” debating venue, forum, choice of law, MDL, and CAFA issues in the context of privacy and data breach class actions.
  6. State regulators discussing the growth of their data privacy and cybersecurity departments and their enforcement priorities in these areas.
  7. Data privacy lawyers and experts discussing the evolution of facial recognition technology and the need to tailor notice and consent processes to risks associated with the technologies and use cases involved.
  8. Cybersecurity lawyers and experts discussing best practices for maintaining attorney-client privilege when responding to a cybersecurity incident.

Thank you to the Sedona Conference Working Group 11 and its incredible team, the fellow dialogue leaders, the engaging participants, and all others who helped make this meeting in Fort Lauderdale, Florida, an informative and unforgettable experience.

For more information on the Duane Morris Class Action Group, including its Data Privacy Class Action Review e-book, and Data Breach Class Action Review e-book, please click the links here and here.

California Federal Court Dismisses Adtech Class Action For Failure To Specify Highly Offensive Invasion Of Privacy

By Gerald L. Maatman, Jr., Justin R. Donoho, Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  On October 30, 2025, in DellaSalla, et al. v. Samba TV, Inc., 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025), Judge Jacqueline Scott Corley of the U.S. District Court for the Northern District of California dismissed a complaint brought by TV viewers against a TV technology company alleging that, by providing advertising technology in the plaintiffs’ smart TVs, the company committed the common law tort of invasion of privacy and violated the Video Privacy Protection Act (“VPPA”), the California Invasion of Privacy Act (“CIPA”), and California’s Comprehensive Computer Data Access and Fraud Act (“CDAFA”).  The ruling is significant because it shows that in the hundreds of adtech class actions across the nation alleging that adtech violates privacy laws, plaintiffs do not plausibly state a common law claim for invasion of privacy unless they specify in the complaint the information allegedly disclosed and explain how such a disclosure was highly offensive.  The case is also significant in that it shows that the VPPA does not apply to video analytics companies, and that California privacy statutes do not apply extraterritorially to plaintiffs located outside California.

Background

This case is one of a legion of class actions that plaintiffs have filed nationwide alleging that third-party technology captured plaintiffs’ information and used it to facilitate targeted advertising. 

This software, often called advertising technologies or “adtech,” is a common feature of millions of consumer products and websites in operation today.  In adtech class actions, the key issue is often a claim brought under a federal or state wiretap act, a consumer fraud act, or the VPPA, because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, multiplied by $2,500 per claimant in statutory damages under the VPPA, for example, yields an enormous damages figure.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they have filed suits against companies spanning nearly every industry, including retailers, consumer products, universities, and the adtech companies themselves.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.
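To put that arithmetic in concrete terms, the following is a minimal sketch using purely hypothetical figures; the class size is an invented assumption, and only the $2,500 per-claimant VPPA figure comes from the discussion above.

```python
# Hypothetical illustration of the statutory-damages arithmetic in adtech class actions.
# The class size below is an invented assumption; $2,500 is the VPPA figure noted above.
class_members = 200_000            # hypothetical number of consumers or website visitors
vppa_statutory_damages = 2_500     # dollars per claimant under the VPPA

potential_exposure = class_members * vppa_statutory_damages
print(f"Claimed exposure: ${potential_exposure:,}")   # Claimed exposure: $500,000,000
```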

In DellaSalla, the plaintiffs brought suit against a TV technology company that embedded a chip with analytics software in plaintiffs’ smart TVs.  Id. at *1, 5.  According to the plaintiffs, the company intercepted the plaintiffs’ “private video-viewing data in real time, including what [t]he[y] watched on cable television and streaming services,” and tied this information to each plaintiff’s unique anonymized identifier in order to “facilitate targeted advertising,” all allegedly without the plaintiffs’ consent.  Id. at *1.  Based on these allegations, the plaintiffs claimed that the TV technology company violated the CIPA, CDAFA, and VPPA, and committed the common-law tort of invasion of privacy. 

The company moved to dismiss, arguing that the CIPA and CDAFA did not apply because the plaintiffs were located outside California, that the VPPA did not apply because the TV technology company was not a “video tape service provider,” and that the plaintiffs failed to plausibly allege a highly offensive violation of a privacy interest.

The Court’s Decision

The Court agreed with the TV technology company and dismissed the complaint in its entirety, with leave to amend any existing claims but not to add any additional claims without further leave.

On the CIPA and CDAFA claims, the Court found that the plaintiffs did not allege that any unlawful conduct occurred in California.  Instead, the plaintiffs alleged that the challenged conduct occurred in their home states of North Carolina and Oklahoma.  Id. at *1, 3-4.  For these reasons, the Court dismissed the CIPA and CDAFA claims, finding that these statutes do not apply extraterritorially.  Id.

On the VPPA claim, the Court addressed the VPPA’s definition of  “video tape service provider,” which is “any person, engaged in the business … of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.”  Id. at *5.  The plaintiffs argued that the TV technology company was a video tape service provider “because its technology is incorporated in Smart TVs, which deliver prerecorded videos.  [The defendant] advertises its technology precisely as providing a ‘better viewing experience’ ‘immersive on-screen experiences’ and a ‘more tailored ad experience’ through its technology.”  Id.  The Court rejected this argument. It held that “[t]his allegation does not plausibly support an inference, [the defendant]—an analytics software provider—facilitated the exchange of a video product. Rather, the allegations support an inference [the defendant] collected information about Plaintiffs’ use of a video product, but not that it provided the product itself.”  Id. (emphasis added).

On the common law claim for invasion of privacy, the TV technology company argued that this claim failed because the plaintiffs “have no expectation of privacy in the information it collects and Plaintiffs have not alleged a highly offensive intrusion.”  In examining this argument, the Court noted that Plaintiff had only provided “vague references” to the information supposedly intercepted.  Id. at *4.  This information included video-viewing data generally (none specified) tied to an anonymized identifier.  Id. at *1, 5.  Thus, the Court agreed with the defendant’s argument and found that plaintiffs identified “no embarrassing, invasive, or otherwise private information collected” and no explanation of how the tracking of video viewing history with an anonymized ID caused plaintiffs “to experience any kind of harm that is remotely similar to the ‘highly offensive’ inferences or disclosures that were actionable at common law.”  Id. at *5.  In sum, the Court concluded that “Plaintiffs have not plausibly alleged a highly offensive violation of a privacy interest.”

Implications For Companies

DellaSalla provides powerful precedent for any company opposing adtech class action claims (1) brought under statutes enacted in states other than the plaintiffs’ place of residence; (2) brought under the federal VPPA where the company allegedly transmitted video usage information, as opposed to any videos themselves; and (3) alleging common-law invasion of privacy, where the plaintiffs have not specified the information disclosed and why such a disclosure is highly offensive.

The last point is a recurring theme in adtech class actions.  Just as this plaintiff suing a TV technology company did not plausibly state a common-law claim for invasion of privacy without identifying the videos watched and any highly offensive harm in associating those videos with an anonymized ID, so too did another plaintiff fail to plausibly state a claim for invasion of privacy premised on adtech’s alleged disclosure of protected health information (“PHI”), without specifying the PHI allegedly disclosed (as we blogged about here).  These cases show that for adtech plaintiffs to plausibly plead claims for invasion of privacy, they at least need to identify what allegedly private information was disclosed and explain how the alleged disclosure was highly offensive.

New York Federal Court’s OpenAI Discovery Orders Provide Key Insights For Companies Navigating AI Preservation Standards

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: In the case of In Re OpenAI, Inc. Copyright Infringement Litigation, No. 23 Civ. 11195 (S.D.N.Y.), Magistrate Judge Ona T. Wang issued a series of discovery orders that signal how courts are likely to approach AI data, privacy, and discovery obligations. Judge Wang’s orders illustrate the growing tension between AI system transparency and data privacy compliance – and how courts are trying to balance them.

For companies that develop or use AI, these rulings highlight both the risk of expansive preservation demands and the opportunity to shape proportional, privacy-conscious discovery frameworks. Below is an overview of these decisions and the takeaways for in-house counsel, privacy officers, and litigation teams.

Background

In May 2025, the U.S. District Court for the Southern District of New York issued a preservation order in a copyright action challenging the use of The New York Times’ content to train large language models. The order required OpenAI to preserve and segregate certain output log data that would otherwise be deleted. Days later, the Court denied OpenAI’s motion to reconsider or narrow that directive. By October 2025, however, the Court approved a negotiated modification that terminated OpenAI’s ongoing preservation obligations while requiring continued retention of the already-segregated data.

The Court’s Core Rulings

  1. Forward-Looking Preservation Now, Arguments Later

On May 13, 2025, the Court entered an order requiring OpenAI to preserve and segregate output log data that would otherwise be deleted, including data subject to user deletion requests or statutory erasure rights. See id., ECF No. 551. The rationale: once litigation begins, even transient data can be critical to issues like bias and representativeness. The Court stressed that it was too early to weigh proportionality, so preservation would continue until a fuller record emerged.

  2. Reconsideration Denied, Preservation Continues

A few days later, when OpenAI sought reconsideration or modification of the preservation order, the Court denied the request without prejudice. Id., ECF No. 559. The Court noted that it was premature to decide proportionality and potential sampling bias until additional information was developed.

  3. A Negotiated “Sunset” and Privacy Carve-Outs

By October 2025, the parties agreed to wind down the broad preservation obligation. On October 9, 2025, the Court approved a stipulated modification that ended OpenAI’s ongoing preservation duty as of September 26, 2025, limited retention to already-segregated logs, excluded requests originating from the European Economic Area, Switzerland, and the United Kingdom for privacy compliance, and added targeted, domain-based preservation for select accounts listed in an appendix. Id., ECF No. 922.

This evolution — from blanket to targeted, time-limited preservation — shows courts’ willingness to adapt when parties document technical feasibility, privacy conflicts, and litigation need.

Implications For Companies

  1. Evidence vs. Privacy: Courts Expect You to Reconcile Both

These rulings show that courts will not accept “privacy law conflicts” as a stand-alone excuse to delete potentially relevant data. Instead, companies must show they can segregate, anonymize, or retain data while maintaining compliance. The OpenAI orders make clear: when evidence may be lost, segregation beats destruction.

  2. Proportionality Still Matters

Even as courts push for preservation, they remain attentive to proportionality. While early preservation orders may seem sweeping, judges are open to refining them once the factual record matures. Companies that track the cost, burden, and privacy impact of compliance will be best positioned to negotiate tailored limits.

  3. Preservation Is Not Forever

The October 2025 stipulation illustrates how to exit an indefinite obligation: offer targeted cohorts, geographic exclusions, and sunset provisions supported by a concrete record. Courts will listen if you bring data, not just arguments.

A Playbook for In-House Counsel

  1. Map Your AI Data Universe

Inventory all AI-related data exhaust: prompts, outputs, embeddings, telemetry, and retention settings. Identify controllers, processors, and jurisdictions.
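As one way to operationalize such an inventory, the sketch below uses a simple Python structure to catalog hypothetical AI data categories along with their controllers, processors, jurisdictions, and retention settings. The field names and example entries are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIDataAsset:
    """One entry in a hypothetical AI data inventory."""
    category: str             # e.g., prompts, outputs, embeddings, telemetry
    system: str               # product or model generating the data
    controller: str           # entity deciding the purposes and means of processing
    processors: List[str]     # vendors or services that handle the data
    jurisdictions: List[str]  # regions whose privacy laws may apply
    retention_days: int       # current automated-deletion setting

# Illustrative entries only; the names and retention periods are invented.
inventory = [
    AIDataAsset("prompts", "support chatbot", "ExampleCo",
                ["CloudVendor A"], ["US", "EEA"], retention_days=30),
    AIDataAsset("output logs", "support chatbot", "ExampleCo",
                ["CloudVendor A"], ["US"], retention_days=90),
]

# Listing what is deleted soonest helps scope a litigation hold quickly.
for asset in sorted(inventory, key=lambda a: a.retention_days):
    print(f"{asset.category}: auto-deleted after {asset.retention_days} days; "
          f"jurisdictions {asset.jurisdictions}")
```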

  2. Build “Pause” Controls

Design systems capable of segregating or pausing deletion by user, region, or product line. This technical agility is key when a preservation order issues.
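For instance, a deletion pipeline can consult a legal-hold registry before purging records. The sketch below assumes a hypothetical in-house registry keyed by user, region, or product line; the names and record fields are illustrative only, not any particular vendor’s API.

```python
# Hypothetical legal-hold check gating automated deletion.
# The hold registry and record fields are assumptions for illustration only.
legal_holds = [
    {"scope": "region", "value": "US"},          # e.g., preserve all US records
    {"scope": "user_id", "value": "user-123"},   # e.g., preserve a specific account
]

def deletion_allowed(record):
    """Return False if any active hold covers this record."""
    return not any(record.get(hold["scope"]) == hold["value"] for hold in legal_holds)

def run_deletion_pass(records):
    """Delete only records not covered by a hold; segregate the rest."""
    preserved = [r for r in records if not deletion_allowed(r)]
    deleted = [r for r in records if deletion_allowed(r)]
    # In a real pipeline, preserved records would move to segregated storage
    # rather than remain in the production deletion queue.
    return preserved, deleted

records = [
    {"user_id": "user-123", "region": "EEA", "type": "output_log"},
    {"user_id": "user-456", "region": "US", "type": "output_log"},
    {"user_id": "user-789", "region": "EEA", "type": "output_log"},
]
preserved, deleted = run_deletion_pass(records)
print(len(preserved), "preserved;", len(deleted), "deleted")  # 2 preserved; 1 deleted
```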

  3. Update Litigation Hold Templates for AI

Traditional holds miss ephemeral or system-generated data. Draft holds that instruct teams how to pause automated deletion while complying with privacy statutes.

  4. Propose Targeted Solutions

When facing broad discovery demands, offer alternatives: limit by time window, geography, or user cohort. Courts will accept reasonable, well-documented compromises.

  5. Build Toward an Off-Ramp

Preservation obligations can sunset — but only if supported by metrics. Track preserved volumes, costs, and privacy burdens to justify targeted, defensible limits.

Conclusion

The OpenAI orders reflect a new judicial mindset: preserve broadly first, negotiate smartly later. AI developers and data-driven businesses should expect similar directives in future litigation. Those that engineer for preservation flexibility, document privacy compliance, and proactively negotiate scope will avoid the steep costs of one-size-fits-all discovery — and may even help set the industry standard for balanced AI litigation governance.

California Federal Court Narrows CIPA “In-Transit” Liability for Common Website Advertising Technology and Urges Legislature to Modernize Privacy Law

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 17, 2025, in Doe v. Eating Recovery Center LLC, No. 23-CV-05561, ECF 167 (N.D. Cal. Oct. 17, 2025), Judge Vince Chhabria of the U.S. District Court for the Northern District of California granted summary judgment to Eating Recovery Center, finding no violation of the California Invasion of Privacy Act (CIPA) where the Meta Pixel collected website event data. Specifically, the Court held that Meta did not “read” the contents of the plaintiff’s communications while they were “in transit.” In so holding, the Court applied the rule of lenity, construed CIPA narrowly, and urged the California Legislature “to step up” and modernize the statute for the digital age. Id. at 2.

This decision is significant because Judge Chhabria candidly described CIPA as “a total mess,” noting it is often “borderline impossible” to determine whether the law – enacted in 1967 to criminalize wiretapping and eavesdropping on confidential communications – applies to modern internet transmissions. Id. at 1. As the Court observed, CIPA “was a mess from the get-go, but the mess gets bigger and bigger as the world continues to change and as courts are called upon to apply CIPA’s already-obtuse language to new technologies.” Id.  This is a “must read” decision for corporate counsel dealing with privacy issues and litigation.

Background

This class action arose after plaintiff, Jane Doe, visited Eating Recovery Center’s (ERC) website to research anorexia treatment and later received targeted advertisements. Plaintiff alleged that ERC’s use of the Meta Pixel caused Meta to receive sensitive URL and event data from her interactions with ERC’s site, resulting in targeted ads related to eating disorders.

ERC had installed the standard Meta Pixel on its website, which automatically collected page URLs, time on page, referrer paths, and certain click events to help ERC build custom audiences for advertising. Id. at 3. Plaintiff alleged that ERC’s use of the Pixel allowed Meta to intercept her communications in violation of CIPA, Cal. Penal Code § 631(a). She also brought claims under the California Medical Information Act (CMIA), the California Unfair Competition Law (UCL), and for common law unjust enrichment. The UCL claim was dismissed at the pleading stage.

ERC later moved for summary judgment on the remaining CIPA, CMIA, and unjust enrichment claims. In a separate order, the Court granted summary judgment on the CMIA and unjust enrichment claims, finding that plaintiff was not a “patient” under the CMIA and that there was no evidence ERC had been unjustly enriched. See id., ECF 168 at 1-2.

The Court’s Decision

With respect to the CIPA claim, the parties disputed two elements under CIPA § 631(a): (1) whether the event data obtained by Meta constituted “contents” of plaintiff’s communication with ERC, and (2) whether Meta read, attempted to read, or attempted to learn those contents while they were “in transit.” ECF 167 at 6.

The Court first held that URLs and event data can constitute the “contents” of a communication because they can reveal substantive information about a user’s activities – such as researching medical treatment. Id. at 7. In so holding, the Court departed from other courts that have reached different conclusions on this issue when considering additional facts or allegations not addressed here (such as encryption and an inability to reasonably identify the data among lines of code).  However, the Court concluded that Meta did not read or attempt to learn any contents while the communications were “in transit.” Instead, Meta processed the data only after it had reached its intended recipient (i.e., ERC, the website operator).

In reaching that conclusion, Judge Chhabria relied on undisputed testimony about Meta’s internal filtering processes: “Meta’s corporate representative testified that, before logging the data that it obtains from websites, Meta filters URLs to remove information that it does not wish to store (including information that Meta views as privacy protected).” Id. at 8.

This evidence supported the finding that Meta’s conduct involved post-receipt filtering rather than contemporaneous “reading” or “learning.” Id. at 9. The Court emphasized that expanding “in transit” to include post-receipt processing would improperly criminalize routine website analytics practices. Because CIPA is both a criminal statute and a source of punitive civil penalties, the Court applied the rule of lenity to adopt a narrow interpretation. Id. at 11-12. The Court further cautioned that an overly broad reading would render CIPA’s related provision (§ 632, prohibiting eavesdropping and recording) largely redundant. Id. at 10.
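To make the timing distinction concrete, the following is a generic sketch of post-receipt filtering of URL query parameters. It is purely a conceptual illustration and not a depiction of Meta’s actual systems; the parameter names treated as sensitive are assumptions.

```python
# Conceptual sketch of post-receipt filtering of URL query parameters.
# Not a depiction of Meta's actual process; the "sensitive" parameter names
# are invented assumptions for illustration.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

SENSITIVE_PARAMS = {"condition", "treatment", "search"}  # hypothetical deny-list

def filter_received_url(url):
    """Strip hypothetical sensitive query parameters after the URL has been received."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in SENSITIVE_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# The filtering runs only after the full URL has reached the recipient, which is
# the post-receipt timing the Court found dispositive under CIPA's "in transit" element.
print(filter_received_url("https://example.org/page?condition=anorexia&utm_source=ad"))
# -> https://example.org/page?utm_source=ad
```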

Finding that Meta did not read, attempt to read, or attempt to learn the contents of Doe’s communications while they were in transit, the court granted summary judgment to ERC on the CIPA claim. Id. at 12.

The opinion concluded by reiterating that California’s decades-old wiretap law is “virtually impossible to apply [] to the online world,” urging the Legislature to “go back to the drawing board on CIPA,” and suggesting that it “would probably be best to erase the board entirely and start writing something new.” Id.

Implications For Companies

The Doe decision narrows one significant avenue for CIPA liability, particularly for routine use of website analytics and advertising pixels. The Northern District of California has now drawn a distinction between data “read” while in transit and data processed after receipt, significantly reducing immediate CIPA exposure for standard web advertising tools.

At the same time, the court’s reasoning underscores that pixel-captured data may be considered by some courts as “contents” of a communication under CIPA, although there is a split of authority on this issue. Companies could therefore face potential exposure under other California privacy statutes, including the CMIA, the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA), depending on the data involved and how it is used.

Organizations should continue to inventory the data they share through advertising technologies, minimize sensitive information in URLs, and ensure clear and accurate privacy disclosures. Because the court expressly invited legislative reform, companies should also monitor ongoing case law and potential statutory amendments.

Ultimately, Doe v. Eating Recovery Center reflects a pragmatic narrowing of CIPA’s “in transit” requirement while reaffirming that CIPA was not intended to cover common website advertising technologies or, in any event, should not be interpreted as such given the harsh statutory penalties involved and the rule of lenity — a conclusion the Supreme Judicial Court of Massachusetts also reached regarding Massachusetts’ wiretap act, as we previously blogged about here.  While this case is a big win for website operators, companies relying on third-party analytics should treat this decision as guidance—not immunity—and continue adopting privacy-by-design principles in their data collection and vendor management practices.

Illinois Federal Court Finds “Self-Inflicted Injury” Insufficient To Confer Article III Standing In Publicity Class Action Lawsuit

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 2, 2025, in Azuz v. Accucom Corp. d/b/a InfoTracer, No. 21-CV-01182, 2025 U.S. Dist. LEXIS 195474 (N.D. Ill. Oct. 2, 2025), Judge LaShonda A. Hunt of the U.S. District Court for the Northern District of Illinois dismissed a class action complaint alleging violations of the Illinois Right of Publicity Act (IRPA). The plaintiff claimed that InfoTracer unlawfully used individuals’ names and likenesses to advertise and promote its products without consent. The Court held that the Plaintiff lacked Article III standing because she failed to plausibly allege a concrete injury – her only alleged harm was “self-inflicted,” as no one other than her own counsel ever searched her name on the site.

The decision illustrates that plaintiffs bringing right of publicity claims against website operators must show that a third party actually accessed their information for a commercial purpose. Mere availability of an individual’s information on a website, without evidence of third-party viewing, does not establish a concrete injury under Article III.

Background

Plaintiff Marilyn Azuz filed a putative class action complaint against Accucom Corp. d/b/a InfoTracer, which operates infotracer.com, a website selling personal background reports. She alleged that Accucom used her name and likeness to advertise and promote its products without written consent, in violation of the IRPA. Id. at *2-4. Plaintiff sought damages and injunctive relief barring Accucom from continuing the alleged conduct. Id. at *4.

After three years of litigation and discovery, Accucom moved to dismiss for lack of subject matter jurisdiction, raising a factual challenge to Article III standing. Accucom submitted evidence showing that the only search of Plaintiff’s name on InfoTracer occurred in February 2021, when her own counsel accessed the site after she responded to a Facebook solicitation by her counsel about potential claims. Accucom argued that such a “self-inflicted” search could not establish a concrete injury and that Plaintiff’s claim for injunctive relief was moot because she had since moved to Minnesota and her data had been removed from the site.

Plaintiff countered that her identity being “held out” to be searched constituted a sufficient injury, and that her request for injunctive relief was not moot because Accucom could resume the alleged conduct.

The Court’s Decision

The Court sided with Accucom, holding that the Plaintiff failed to establish a concrete injury and therefore lacked standing to pursue her individual claims. Id. at *15.

Relying on the U.S. Supreme Court’s decision in TransUnion LLC v. Ramirez, 594 U.S. 413 (2021), Judge Hunt explained that an intangible statutory violation, without evidence of concrete harm, is insufficient for Article III standing.  Just as inaccurate information in a credit file causes no concrete injury unless disclosed to a third party, the Court concluded, “a person’s identity is not appropriated under the IRPA unless it is used for a commercial purpose.” Id. at *14.

The Court rejected Plaintiff’s reliance on Lukis v. Whitepages Inc., 549 F. Supp. 3d 798 (N.D. Ill. 2021), noting that Lukis involved only a facial attack to standing at the pleading stage, not a factual attack supported by evidence, like here. Id. at *9-10.

Noting that it had not found any post-TransUnion decisions analyzing the IRPA under a factual challenge to standing, Judge Hunt found Fry v. Ancestry.com Operations Inc., 2023 U.S. Dist. LEXIS 50330 (N.D. Ind. Mar. 24, 2023) to be instructive. Id. at *11. In Fry, the court cautioned that a plaintiff asserting a right of publicity claim must ultimately produce evidence showing that his likeness was viewed by someone other than his attorney or their agents. That same “forewarning,” Judge Hunt concluded, applied to Plaintiff, who presented no such evidence. Id. at *12-13.

The Court also dismissed Plaintiff’s request for injunctive relief, holding that any potential future harm was speculative and not sufficiently imminent. Because Plaintiff had relocated to Minnesota, the IRPA’s extraterritorial application could not extend to her circumstances. Id. at *16.

Finally, the Court declined to allow the substitution of new named plaintiffs so that the case could continue, reasoning that because the original plaintiff lacked standing from the outset, the Court never had jurisdiction to allow substitution. Id. at *17.

Implications For Companies

Azuz underscores the importance of scrutinizing Article III standing in every stage of litigation, particularly in statutory publicity and privacy cases. Where plaintiffs cannot show that a third party viewed or interacted with their data, courts are likely to find no concrete injury — and therefore no federal jurisdiction.

Website operators facing IRPA or similar publicity-based class actions should consider asserting factual standing challenges supported by evidence demonstrating the absence of third-party access. Such jurisdictional defenses can be decisive and may be raised at any time in the litigation.

Hospital Defeats Wiretap Adtech Class Action After Texas Federal Court Finds No Knowing Disclosure Of Protected Health Information

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: On September 22, 2025, in Sweat v. Houston Methodist Hospital, No. 24-CV-00775, 2025 U.S. Dist. LEXIS 185310 (S.D. Tex. Sept. 22, 2025), Judge Lee H. Rosenthal of the U.S. District Court for the Southern District of Texas granted a motion for summary judgment in favor of a hospital accused of violating the federal Wiretap Act through its use of website advertising technology. This decision is significant. In the wave of adtech class actions seeking millions – sometimes billions – in statutory damages under the Wiretap Act and similar statutes, the Court held that the Act’s steep penalties (up to $10,000 per violation) were not triggered because the hospital did not knowingly transmit protected health information.

Background

This case is part of a rapidly growing line of class actions alleging that website advertising tools – such as the Meta Pixel, Google Analytics, and other similar website advertising technology, or “adtech” – secretly capture users’ web-browsing activity and share it with third-party advertising platforms.

Adtech is ubiquitous, embedded on millions of websites. Plaintiffs’ lawyers frequently invoke the federal Wiretap Act, the Video Privacy Protection Act (VPPA), state invasion-of-privacy statutes like the California Invasion of Privacy Act (CIPA), and even the Illinois Genetic Information Privacy Act (GIPA). Their theory is straightforward: multiply hundreds of thousands of website visitors by $10,000 per alleged Wiretap Act violation and the potential damages skyrocket. While some of these class actions have resulted in multi-million-dollar settlements, others have been dismissed (as we blogged about here), and the vast majority remain pending. With some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.

In Sweat, the plaintiffs sued a hospital, seeking to represent a class of patients whose personal health information was allegedly disclosed by the Meta Pixel installed on the hospital’s website. The district court granted the hospital’s motion to dismiss the state law invasion of privacy claim but allowed the Wiretap Act claim to proceed to discovery. The hospital then moved for summary judgment, arguing that the Wiretap Act’s crime-tort exception did not apply because the hospital lacked knowledge that it was disclosing protected health information.

Under the Wiretap Act, a “party to the communication” cannot be sued unless it intercepted the communication “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This provision is commonly called the “crime-tort exception.” The plaintiffs pointed to alleged violations of the Health Insurance Portability and Accountability Act (HIPAA) as the predicate crime to trigger this exception.

The Court’s Decision

The Court agreed with the hospital and granted summary judgment, holding that the record contained no evidence that the hospital acted with the “purpose of committing any criminal or tortious act” that would trigger the crime-tort exception. 2025 U.S. Dist. LEXIS 185310, at *13.

As the Court explained, case law authorities have developed two different approaches to determine “purpose” under the crime-tort exception. Some courts use the “independent act” approach, under which the unlawful act must be independent of the interception itself. Other courts have used the “primary purpose” approach, under which the defendant’s primary motivation must be to commit a crime or tort.

Applying the “primary purpose” approach, the Court found “no evidence that [the hospital] acted with the purpose of violating HIPAA…the evidence shows that it did not know it was doing so.” Id. at *13. In so holding, the Court cited to the fact that, although the Pixel was installed on “arguably sensitive portions” of the hospital’s website, the hospital received only aggregated, anonymized data, and there was no proof it knew any protected health information was being disclosed. Id. at *13-14. The Court rejected the plaintiffs’ argument that anonymized aggregate data necessarily originates from identifiable data, emphasizing that Meta’s algorithm could anonymize data “at the input level,” preventing the hospital from receiving identifiable data in the first place. Id. at *16.

Implications For Companies

The Court’s holding in Sweat is a significant win for healthcare providers and other defendants facing adtech class actions. This ruling reinforces two key principles. First, knowledge is critical. Like the Wiretap Act’s HIPAA-based crime-tort exception, similar statutes such as the VPPA require a knowing disclosure of identifiable information. If a defendant lacks knowledge that data is tied to specific individuals, liability should not attach. Second, anonymization matters. Where transmissions are encrypted, anonymized, or otherwise inaccessible at the point of input, there may be no “disclosure” at all.

For example, the VPPA requires disclosure of a person’s specific video-viewing activity, and GIPA requires disclosure of an identified individual’s genetic information. When adtech merely sends anonymized or encrypted data to third-party algorithms—data that cannot be traced back to a specific person—there is no knowing disclosure.
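A rough sketch of what aggregate, de-identified reporting can look like in code appears below, using invented events and page names; it illustrates the general concept of aggregation without user identifiers, not any particular adtech vendor’s processing.

```python
# Hypothetical illustration of aggregate, de-identified reporting.
# The raw events and page names below are invented for illustration only.
from collections import Counter

raw_events = [
    {"user_id": "u1", "page": "/services/cardiology"},
    {"user_id": "u2", "page": "/services/cardiology"},
    {"user_id": "u3", "page": "/services/oncology"},
]

# Aggregate before any transmission: per-page counts with user identifiers dropped.
aggregate_report = Counter(event["page"] for event in raw_events)
print(dict(aggregate_report))  # {'/services/cardiology': 2, '/services/oncology': 1}
```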

Sweat provides strong authority for defendants to argue that anonymized adtech transmissions cannot satisfy the statutory knowledge requirements of the Wiretap Act’s HIPAA-based crime-tort exception or similarly worded privacy statutes.

California Adopts New Rules Expanding The FEHA’s Reach To AI Tool Developers

By Gerald L. Maatman, Jr., Justin Donoho, and George J. Schaller

Duane Morris Takeaways: On October 1, 2025, California’s “Employment Regulations Regarding Automated-Decision Systems” will take effect.  These new AI employment regulations can be accessed here.  The regulations add an “agency” theory under the California Fair Employment and Housing Act (FEHA) and formalize this theory’s applicability to AI tool developers and companies employing AI tools that facilitate human decision making for recruitment, hiring, and promotion of job applicants and employees.  With California’s inclusion of a private right of action under the FEHA, these new AI employment regulations may augur an uptick in AI employment tool class actions brought under the FEHA.  This blog post identifies key provisions of this new law and steps employers and AI tool developers can take to mitigate FEHA class action risk.

Background 

In the widely-watched class action captioned Mobley v. Workday, No. 23-CV-770 (N.D. Cal.), the plaintiff alleges that an AI tool developer’s algorithm-based screening tools discriminated against job applicants on the basis of race, age, and disability in violation of Title VII of the Civil Rights Act of 1964 (“Title VII”), the Age Discrimination in Employment Act of 1967 (“ADEA”), the Americans with Disabilities Act Amendments Act of 2008 (“ADA”), and California’s FEHA.  Last year the U.S. District Court for the Northern District of California denied dismissal of the Title VII, ADEA, and ADA disparate impact claims on the theory that the developer of the algorithm was plausibly alleged to be the employer’s agent, and dismissed the FEHA claim which was brought only under the then-available theory of intentional aiding and abetting (as we previously blogged about here).

In recent years, discrimination stemming from AI employment tools has been addressed by other state and local statutes, including Colorado’s AI Act (CAIA) setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination,” New York City’s law regarding the use of automated employment decision tools, the Illinois AI Video Interview Act, and the 2024 amendment to the Illinois Human Rights Act (IHRA) to regulate the use of AI, with only the last of these laws providing for a private right of action (once it becomes effective January 1, 2026).

Key Provisions Of California’s AI Employment Regulations

California’s AI employment regulations amend and clarify how the FEHA applies to AI employment tools, thus constituting a new development in case theories available to class action plaintiffs regarding alleged harms stemming from AI systems and algorithmic discrimination.  

Employers and AI employment tool developers should take note of key provisions codified by California’s new AI employment regulations, as follows:

  • Agency theory.  An “agency” theory is added under the FEHA like the one that allowed the plaintiff in Mobley v. Workday to proceed past a motion to dismiss on his federal claims, whereby an AI tool developer may face litigation risk for developing algorithms that result in a disparate impact when the tool is used by an employer.  While Mobley v. Workday continues to proceed in the trial court, no appellate authority has yet had occasion to address the “agency” theories being litigated in that case under federal antidiscrimination statutes.  However, with the California AI employment regulations taking effect October 1, 2025, that theory is now expressly codified under the FEHA.  2 Cal. Code Regs § 11008(a).
  • Proxies for discrimination.  The regulations clarify that it is unlawful to use an employment tool algorithm that discriminates by using a “proxy,” which the regulations define as a “characteristic or category closely correlated with a basis protected by the Act.”  Id. §§ 11008(a), 11009(f).  While the regulations do not explicitly identify any proxies, proxies that have been identified in literature by the EEOC’s former Chief Analyst include zip code (this proxy is also codified in the IHRA), first name, alma mater, credit history, and participation in hobbies or extracurricular activities.
  • Anti-bias testing.  The regulations state that relevant to a claim of employment discrimination or an available defense are “anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of such testing or other effort, and the response to the results.”  Id. § 11020(b).  Thus, for example, adoption of the NIST’s AI risk management framework, itself codified as a defense under the CAIA, could be a factor to consider as a defense under the FEHA.  Many other factors are pertinent with respect to anti-bias testing, including auditing, tuning, and the use of various interpretability methods and fairness metrics, discussed in our prior blog entry and article on this subject (here).  A simple illustration of one common anti-bias test appears in the sketch following this list.
  • Data retention.  The regulations provide that employers, employment agencies, labor organizations, and apprenticeship training programs must maintain employment records, including automated-decision data, for a minimum of four years.  Id. § 11013(c).
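As a concrete illustration of one common form of anti-bias testing, the sketch below computes a disparate-impact (adverse-impact) ratio for a hypothetical AI screening tool and flags groups falling below the traditional four-fifths benchmark. The applicant counts, group labels, and use of the 0.8 threshold are assumptions for illustration, not requirements of the California regulations.

```python
# Hypothetical adverse-impact ("four-fifths rule") check for an AI screening tool.
# Applicant and selection counts are invented for illustration only.
selections = {
    "group_a": (400, 120),   # (applicants, selected)
    "group_b": (300, 60),
}

rates = {group: selected / applicants
         for group, (applicants, selected) in selections.items()}
benchmark = max(rates.values())  # highest selection rate serves as the reference

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # traditional four-fifths benchmark
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} [{flag}]")
```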

Implications For Employers

California’s AI employment regulations increase employers’ and AI tool developers’ risks of facing class action lawsuits similar to Mobley v. Workday and/or other suits alleging discrimination under the FEHA.  However, developers and employers have several tools at their disposal to mitigate AI employment tool class action risk.  One is to ensure that AI employment tools comply with the FEHA provisions discussed above and with other antidiscrimination statutes.  Others include adding or updating arbitration agreements to mitigate the risks of mass arbitration; collaborating with IT, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and updating notices to third parties and vendor agreements.

Crypto Class Action Key Decisions and Trends in 2025

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Legal Intelligencer by Justin Donoho entitled “Crypto Class Action Key Decisions and Trends in 2025.”  The article is available here and is a must-read for corporate counsel involved with crypto and blockchain technologies.

This year has already been a busy one in the crypto class action litigation landscape.  It has seen several significant court decisions that have continued to shape the law in this growing area, including decisions on dispositive motions regarding whether various crypto transactions are sales of unregistered “securities” and, if so, whether the operator of a crypto exchange may be held liable for such transactions.  Two class certification split decisions were also issued, showing why claims for the sale of unregistered securities remain popular with the plaintiffs’ bar, whereas other types of claims increasingly being brought by the plaintiffs’ bar face significant hurdles to class certification.  There have also been several multimillion-dollar crypto class action settlements.  In addition, dozens of new crypto class action cases have been filed, auguring a continued trend of further development in this area.  This article analyzes these key decisions and trends.

Implications For Corporations

With crypto assets continuing to proliferate and the current presidential administration reducing enforcement priorities relating to sales of crypto assets, crypto class action litigation is multiplying.  We should expect to see an upward trend of key decisions and new cases in the remainder of this year and beyond, as this burgeoning area of the law continues to unfold.

New York Federal Court Dismisses Adtech Class Action Because No Ordinary Person Could Identify Web User

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On September 3, 2025, in Golden v. NBCUniversal Media, LLC, No. 22-CV-9858, 2025 WL 2530689 (S.D.N.Y. Sept. 3, 2025), Judge Paul A. Engelmayer of the U.S. District Court for the Southern District of New York granted a motion to dismiss with prejudice for a media company on a claim that the company’s use of website advertising technology on its website violated the Video Privacy Protection Act (“VPPA”).  The ruling is significant for the explosion of adtech class actions across the nation seeking millions or billions of dollars in statutory damages under not only the VPPA but also myriad other statutes providing for statutory penalties on similar theories that the website owner disclosed website activities to Facebook, Google, and other advertising agencies: it shows that the statute and its harsh penalties should not be triggered where no ordinary person could access and decipher the information transmitted.

Background

This case is one of a multiplying legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing activity and sent it to Meta, Google, and other online advertising agencies.

This software, often called website advertising technology or “adtech,” is a common feature on corporate, governmental, and other websites in operation today.  In adtech class actions, the key issue is often a claim brought under the VPPA, a federal or state wiretap act, a consumer fraud act, or even the Illinois Genetic Information Privacy Act (GIPA), because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of website visitors, multiplied by $2,500 per claimant in statutory damages under the VPPA, for example, yields an enormous damages figure.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they also have filed suits against companies that span nearly every industry, including retailers, consumer products, and universities.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.  Especially with some district courts being more permissive than others in allowing adtech class actions to proceed beyond the motion to dismiss stage (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an alarming rate.

In Golden, the plaintiff brought suit against a media company.  According to the plaintiff, she signed up for an online newsletter offered by the media company and, thereafter, visited the media company’s website, where she watched videos.  Id. at *2-4.  The plaintiff further alleged that, after she watched those videos, her video-watching history was sent to Meta without her permission via the media company’s undisclosed use of the Meta Pixel on its website.  Id.  Like plaintiffs in most adtech class action complaints, this plaintiff: (1) alleged that before the company sent the web-browsing data to the online advertising agency (e.g., Meta), the company encrypted the data via the secure “https” protocol (id., ECF No. 56 ¶ 45); and (2) did not allege that any human reviewed her encrypted web-browsing data, or that the advertising agency, or any other entity or person, stored her web-browsing data in a decrypted (readable) format or could retrieve it from the advertising agency’s algorithms in such a format.  Based on these allegations, the plaintiff asserted a violation of the VPPA.

The media company moved to dismiss under Rule 12(b)(6), arguing that the plaintiff did not adequately allege that the media company “disclosed” the plaintiff’s “personally identifiable information” (“PII”), defined under the VPPA as “information which identifies a person as having requested or obtained specific video materials or services….”  Id., 2025 WL 2530689, at *5-6.

The Court’s Decision

The Court agreed with the media company and held that the plaintiff failed plausibly to plead any unauthorized “disclosure.” 

As the Court explained, “PII, under the VPPA, has three distinct elements: (1) the consumer’s identity, (2) the video material’s identity, and (3) the connection between them.”  Id. at *6.  Moreover, PII “encompasses information that would allow an ordinary person to identify a consumer’s video-watching habits, but not information that only a sophisticated technology company could use to do so.”  Id. (emphasis in original).  Therefore, “to survive a motion to dismiss, a complaint must plausibly allege that the defendant’s disclosure of information would, with little or no extra effort, permit an ordinary recipient to identify the plaintiff’s video-watching habits.”  Id.  For these reasons, explained the Court, the Second Circuit has “effectively shut the door for Pixel-based VPPA claims.”  Id. at *7 (citing Hughes v. National Football League, 2025 WL 1720295 (2d Cir. June 20, 2025)).

Applying these standards, the Court dismissed the plaintiff’s VPPA claim with prejudice, holding that, “[i]n short, because the alleged disclosure could not be appreciated — decoded to reveal the actual identity of the user, and his or her video selections — by an ordinary person but only by a technology company such as Facebook, it did not amount to PII.”  Id. at *6-7.  In so holding, the Court cited an “emergent line of authority” shutting the door on VPPA claims not only in the Second Circuit but also in other U.S. Courts of Appeal.  See In Re Nickelodeon Consumer Priv. Litig., 827 F.3d 262, 283 (3d Cir. 2016) (affirming dismissal of VPPA case involving the use of Google Analytics, stating, “To an average person, an IP address or a digital code in a cookie file would likely be of little help in trying to identify an actual person”); Eichenberger v. ESPN, Inc., 876 F.3d 979, 986 (9th Cir. 2017) (affirming dismissal of VPPA case because “an ordinary person could not use the information that Defendant allegedly disclosed [a device serial number] to identify an individual”).

Implications For Companies

The Court’s holding in Golden is a win for adtech class action defendants and should be instructive for courts around the country addressing adtech class actions brought under not only the VPPA, but also other statutes prohibiting “disclosures,” and the like.  These statutes should be interpreted similarly to require proof that an ordinary person could access and decipher the web-browsing data, identify the person, and link the person to the data. 

Consider a few examples.  A GIPA claim requires proof of a disclosure or a breach of confidentiality and privilege.  An eavesdropping claim under the California Information of Privacy Act (CIPA) § 632 requires proof of eavesdropping.  A trap and trace claim under CIPA § 638.51 requires proof that the data captured is reasonably likely to identify the source of the data.  A claim under the Electronic Communications Privacy Act (ECPA) requires proof of an interception.

When adtech sends encrypted, inaccessible, anonymized transmissions to the advertising agency’s algorithms, has there been any disclosure or breach of confidentiality and privilege (GIPA), eavesdropping (CIPA § 632), data capture reasonably likely to identify the source (CIPA § 638.51), or interception (ECPA)?  Just as adtech transmissions are insufficient to amount to a disclosure under the VPPA, Golden suggests that such transmissions should not trigger these similarly worded statutes either, because no ordinary person could access and decipher the data transmitted.

Best Practices To Mitigate The Risk Of Class Action Litigation Over AI Pricing Tool Noncompliance With Antitrust And AI Statutes

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Ten Design Guidelines to Mitigate the Risk of AI Pricing Tool Noncompliance with the Federal Trade Commission Act, Sherman Act, and Colorado AI Act.”  The article is available here and is a must-read for corporate counsel involved with development or deployment of AI pricing tools.

While artificial intelligence (AI) pricing tools can improve revenues for retailers, suppliers, hotel operators, landlords, ride-hailing platforms, airlines, ticket distributors, and more, designers and deployers of such tools increasingly face the risk of being targeted in lawsuits brought by governmental bodies and class action plaintiffs alleging unfair methods of competition in violation of the Federal Trade Commission (FTC) Act and agreements that restrain trade in violation of the federal Sherman Act.  This article identifies recently emerging trends in such lawsuits, including one currently on appeal in the U.S. Court of Appeals for the Third Circuit and three pending in district courts, draws common threads, and discusses ten guidelines that AI pricing tool designers should consider to mitigate the risk of noncompliance with the FTC Act, the Sherman Act, and the Colorado AI Act.

Implications For Corporations

Compared with tools not designed for compliance, AI pricing tools designed to comply with antitrust and AI laws face a lower risk of an expensive class action lawsuit or government-initiated proceeding alleging violations of such laws.  Moreover, by enabling and automating informed pricing decisions, AI pricing tools hold the potential to drive market efficiencies.  This article identifies best practices to assist with such compliance and, relatedly, such market efficiencies.
