New York Federal Court Certifies Crypto Class Action With Modifications And Reserves Causation Question For Summary Judgment Proceedings

By Gerald L. Maatman, Jr. and Justin R. Donoho

Duane Morris Takeaways:  On March 6, 2026, Judge Katherine Polk Failla of the U.S. District Court for the Southern District of New York granted class certification with modifications in a case involving a stablecoin issuer’s alleged issuance of unbacked or debased stablecoins in furtherance of an alleged scheme to manipulate the market prices for crypto commodities and futures in the litigation captioned In Re Tether & Bitfinex Crypto Asset Litigation, No. 19 Civ. 9236, 2026 WL 629826 (S.D.N.Y. Mar. 6, 2026).  The ruling is significant because it shows that crypto purchasers who file class action complaints alleging violations of the Sherman Act and the Commodity Exchange Act may be able to satisfy Rule 23 so long as they offer reliable expert models on class-wide causation and damages and limit their proposed classes to purchasers who used fiat currency or stablecoins to make their purchases on domestic or stateless exchanges.  Even so, such class actions may still be subject to dismissal at summary judgment on the question of whether the defendants’ alleged issuance of unbacked or debased stablecoins caused an increase in the price of crypto commodities and futures.

Background

In the litigation captioned In Re Tether & Bitfinex Crypto Asset Litigation, the plaintiffs, four purchasers of Bitcoin, Bitcoin futures, and other crypto assets, brought a class action against various entities and individuals associated with the issuer of a stablecoin and the stablecoin issuer’s sister company, a crypto asset exchange, alleging that the defendants artificially inflated the prices of the plaintiffs’ crypto asset purchases by engaging in market manipulation under the Commodity Exchange Act and monopolization and restraint of trade under the Sherman Act.  Id. at *2, 26.

According to the plaintiffs, the stablecoin issuer issued hundreds of millions of unbacked or debased stablecoins while telling the market that these stablecoins were fully backed by U.S. dollars whereas actually they were backed only by the sister crypto exchange’s accounts receivables and inaccessible funds.  Id. at *2-3.  Further according to plaintiffs, the defendants used an anonymous trader to engage in cross-exchange arbitrage by purchasing “massive” amounts of crypto commodities on other exchanges with the debased stablecoin, selling them on the defendant exchange for U.S. dollars, and withdrawing those funds as the stablecoin.  Id. at *4.  All these activities were allegedly performed by the defendants with knowledge and intent to inflate crypto commodity and futures prices and allegedly resulted in artificially inflated prices of crypto assets purchased by the plaintiffs.  Id.

The plaintiffs moved for class certification under Rule 23, seeking to certify classes of persons who acquired crypto commodities and crypto futures, respectively, in the United States during the class period.  Id. at *5.  In support, the plaintiffs submitted a report from an antitrust and economics expert that included an event study purporting to show that the issuance of the unbacked or debased stablecoin caused the price of Bitcoin to increase, a regression analysis that purported to model how a change in the outstanding volume of the stablecoin affects Bitcoin prices, and an overcharge model that purported to quantify the artificial inflation of Bitcoin based on the extent to which the stablecoin was debased or unbacked.  Id. at *6.

The defendants moved to exclude the plaintiffs’ expert and opposed class certification by challenging only adequacy and predominance (not any of the other Rule 23 requirements).  On adequacy, the defendants argued that adequacy was not satisfied due to two sources of potential intraclass conflict – intraclass trading and the plaintiffs’ alternative models for showing debasement and inflation.  On predominance, the defendants argued that individual questions would predominate when resolving questions of class-wide impact, injury, and extraterritoriality.

The Court’s Decision

The Court began its analysis by excluding the plaintiffs’ expert’s event study purporting to show that the issuance of the unbacked or debased stablecoin caused the price of Bitcoin to increase.  As the Court explained, the event study was unreliable because the “t-test” model it employed violated the key assumption of the model “that the values within each tested group are independent, meaning that they are not correlated with each other.”  Id. at *6 n.5, 12-13.  However, the Court denied exclusion of the expert’s regression model, overcharge model, and other opinions.  Id. at *14-19.

Turning next to the defendants’ two adequacy challenges, the Court rejected both.  First, the Court found that intraclass trading did not create any conflicts because the alleged classes included only buyers alleging only price inflation.  Id. at *23-24.  Second, the Court found that there were also no intraclass conflicts based on plaintiffs’ alternative methods for showing stablecoin debasement because the methods differed only “in the extent of the debasement they show on certain days, but they are not diametrically opposed. In fact, the debasement is, by default, one-directional.”  Id. at *25.

Turning to defendants’ challenges to predominance, the Court found that common evidence would be used “to establish that Defendants engaged in certain conduct, such as issuing debased or unbacked [stablecoins], misrepresenting that [the stablecoins were] always backed one-to-one by USD held in reserve by [the defendant crypto exchange], disseminating debased [stablecoins] through the Anonymous Trader, and conspiring with the Anonymous Trader to increase cryptocommodity prices.”  Id. at *27.  The Court also found that common evidence would be used for the elements relating to the defendants’ scienter or intent.  Id. at *27.  In sum, the Court found that common questions predominated as to “issues related to defendants’ anticompetitive conduct.”  Id.  However, as the Court explained, “the elements of antitrust and CEA cases that pertain to Defendants’ conduct almost always present a common question that predominates … Because of this, class certification in CEA and antitrust cases often turns on whether common issues predominate in establishing injury, causation, or damages.”  Id. at *26-27 (emphasis added).

Next, the Court found that the plaintiffs could demonstrate class-wide impact or causation through their expert’s regression analysis, although the Court found this to be a “closer question.”  Id. at *28.  Although the defendants did not offer a sufficient basis for excluding the regression analysis (such as a failure by the expert to account for a key variable), the Court nevertheless found that the defendants called into question the plaintiffs’ ability to establish “the fact of causation” with their regression model.  Id. at *28-30.  However, as the Court explained, “That type of challenge sounds more in summary judgment than in Rule 23(b)(3). Indeed, the Supreme Court has warned that when ‘the concern about the proposed class is not that it exhibits some fatal dissimilarity but, rather, a fatal similarity — [an alleged] failure of proof as to an element of the plaintiffs’ cause of action — courts should engage that question as a matter of summary judgment, not class certification.’”  Id. (quoting Tyson Foods, Inc. v. Bouaphakeo, 577 U.S. 442, 457 (2016)).

Further, the Court found that the plaintiffs could measure damages on a class-wide basis using Plaintiffs’ overcharge model.  Id. at *31.

Finally, as to the defendants’ remaining challenges to predominance, the Court rejected them as to the predominance finding but embraced them for purposes of narrowing the Plaintiffs’ proposed class definitions in two ways. 

First, on the question of injury, the Court found that whether common issues predominate turns on whether injury occurs “(i) when a Class Member purchases an artificially inflated cryptocommodity, or (ii) when that Class Member experiences economic loss flowing from their purchase of that cryptocommodity.”  Id. at *31.  As the Court explained, whereas the defendants argued “that economic loss is required for Class Members to establish injury — like in the securities context,” the plaintiffs argued “that the magnitude of loss only matters for the calculation of damages — like in the antitrust context.”  Finding this issue “close,” the Court ruled for the plaintiffs, reasoning as follows: “The [d]efendants are correct that the Class Assets share more in common with securities than commodities such as olive oil, especially given that purchasers of cryptocommodities often sell them later, either at a loss or gain … But the Court ultimately sides with Plaintiffs because, at its core, this is an antitrust case, not a securities action. And unlike in securities cases, antitrust injury flows from the overcharge itself.”  Id. at *32.  The Court “remain[ed] concerned, however, that the initial harm that is required to establish an antitrust injury is not as clearcut for Class Members who purchased cryptocommodities with other cryptocommodities” because “whether that purchaser has incurred the required initial overcharge would depend on whether the purchasing cryptocommodity was more or less inflated than the purchased cryptocommodity.”  Id. at *33.  In addition, the Court found no injury for any alleged class members who acquired class assets only by engaging in mining, using a crypto fork, or receiving them as gifts.  Id. at *33.  Thus, the Court limited the proposed classes of crypto commodity and futures acquirers to purchasers who used fiat currency or stablecoins.  Id.

Second, on the question of extraterritoriality, the Court found that “[o]n Plaintiffs’ CEA [Commodity Exchange Act] cause of action, individual questions predominate regarding futures trades on foreign exchanges” because “the domesticity of transactions on foreign exchanges is too fact-specific for class certification,” including “facts concerning the formation of the contracts, the placement of purchase orders, the passing of title, or the exchange of money.”  Id. at *34-36.  Foreign exchanges aside, the Court found that “there are no individualized questions as to domesticity for futures transactions executed on domestic exchanges.”  Id.  Lastly, on the remaining question of whether stateless exchanges “are governed by the domestic exchange rule or foreign exchange rule,” a question the Court found “especially important in the context of the crypto-economy,” the Court held that this inquiry satisfied predominance because it “can be determined on an exchange-by-exchange, rather than person-to-person, basis.”  Id. at *35.  Accordingly, the Court limited the futures subclass to all purchasers of crypto commodity futures with fiat currency or stablecoins in the United States during the class period so long as they purchased futures on either U.S.-based exchanges or stateless exchanges “that either (a) matched trades on servers in the United States or (b) prohibited buyers from revoking their orders once placed.”  Id. at *38.

For these reasons, the Court granted the plaintiffs’ motion for class certification and narrowed the plaintiffs’ proposed class definitions.

Implications For Companies

The In Re Bitfinex class certification ruling is an instructive one for litigants on either side of crypto class actions alleging antitrust and commodities violations.  For plaintiffs, it shows that the table stakes for achieving class certification of such claims include (a) proffering reliable models regarding class-wide causation and damages and (b) limiting class definitions to transactions for which common evidence can satisfy the injury element and defeat the extraterritoriality defense.  For defendants, it shows that if their challenges to plaintiffs’ causation and damages models are ineffective, summary judgment remains available as a vehicle to show the absence of sufficient evidence that the defendants’ alleged conduct caused any purported antitrust or commodities injury.

Massachusetts Federal Court Dismisses Adtech ECPA Class Action For Failure To Allege Defendants Purposefully Committed A Criminal Act, Furthering Split Of Authority

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: On March 6, 2026, in Progin v. UMass Memorial Health Care, Inc., No. 25-CV-40003, 2026 U.S. Dist. LEXIS 46522 (D. Mass. Mar. 6, 2026), Judge Allison D. Burroughs of the U.S. District Court for the District of Massachusetts granted a motion to dismiss a class action complaint brought by website users against Massachusetts health care and hospital entities. Plaintiffs alleged that the defendants’ use of website advertising technology (“adtech”) violated the federal Wiretap Act, also known as the Electronic Communications Privacy Act (“ECPA”).  Following another similar ruling in the same court, see Goulart v. Cape Cod Healthcare, Inc., 2025 U.S. Dist. LEXIS 119435 (D. Mass. June 24, 2025), the decision is significant because it reflects the Massachusetts federal court’s alignment with other federal courts (including the U.S. District Court for the Southern District of Texas, as we blogged about here) that have interpreted the ECPA in a defense-friendly manner. In contrast, courts in other jurisdictions (including Illinois federal courts, as we blogged about here) have adopted more plaintiff-friendly interpretations, further deepening the emerging split of authority in adtech privacy litigation.

Background

Progin is one of a legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in websites secretly captured plaintiffs’ web-browsing data and transmitted that data to Meta, Google, and other online advertising agencies and data analytics companies.

In these adtech and similar internet-based technology class actions, plaintiffs frequently rely on the ECPA’s statutory damages provision. Their theory is simple: multiply the number of website visitors – potentially hundreds of thousands – by $10,000 in statutory damages per claimant to produce enormous potential exposure. Although plaintiffs have filed a majority of these lawsuits to date against healthcare providers, they have filed suits against companies spanning nearly every industry, including education, retail, and consumer products. Some of these cases have resulted in multimillion-dollar settlements, while others have been dismissed at the pleading stage (as we blogged about here) or the summary judgment stage (as we blogged about here), and the vast majority remain undecided.

In Progin, the plaintiffs sued a group of health care and hospital entities, seeking to represent a class of patients whose personal health information was allegedly disclosed by the Meta Pixel installed on defendants’ websites. The plaintiffs claimed that these alleged transmissions constituted an “interception” by defendants in violation of the ECPA.

Under the ECPA, a “party to the communication” generally cannot be sued unless it intercepted the communication “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This provision is commonly referred to as the “crime-tort exception.”

Plaintiffs argued that alleged violations of the Health Insurance Portability and Accountability Act (HIPAA) served as the predicate crime to trigger this exception. Specifically, plaintiffs argued that defendants were liable under the crime-tort exception because they intercepted and disclosed plaintiffs’ communications and personal information to third parties without consent in violation of HIPAA. 2026 U.S. Dist. LEXIS 46522, at *11.

The defendants moved to dismiss, arguing that the crime-tort exception did not apply because they did not install the Meta Pixel “for the distinct purpose of violating HIPAA or perpetrating a tort.” Id. at *11-12.

The Court’s Decision

The Court agreed with defendants and granted their motion to dismiss, holding that the amended complaint’s allegations “do not support the inference that Defendants purposefully committed the ‘criminal and tortious acts’ specified by Plaintiffs.” Id. at *13-14.

As the Court explained, based on the alleged predicate acts, plaintiffs were required to plausibly allege that defendants “purposefully used or caused to be used” plaintiffs’ unique health identifiers without authorization; “purposefully disclosed” plaintiffs’ individually identifiable health information to Facebook or Google without authorization; or “purposefully invaded” plaintiffs’ privacy.  Id. at *12-13.

Importantly, the Court emphasized that merely alleging that defendants knowingly committed such acts is insufficient because “‘purpose’ is an essential element of ECPA, distinct from the minimal intent [of knowingness] required under HIPAA.” Id. at *13 (quoting Doe v. Lawrence Gen. Hosp., 2025 U.S. Dist. LEXIS 195964, at *32 (D. Mass. Aug. 29, 2025)). The Court further explained that “[i]t is not enough that a crime or tort [may have been] a . . . side effect of the interception.” Id. at *14 (quoting Doe, 2025 U.S. Dist. LEXIS 195964, at *30).

Implications For Companies

The decision in Progin is a big win for healthcare providers and other defendants facing adtech class actions. This ruling reinforces a critical principle in ECPA and other privacy-based litigation: the defendants’ state of mind matters.

Under the ECPA’s HIPAA-based crime-tort exception, as well as under similar privacy statutes such as the Video Privacy Protection Act (“VPPA”), liability depends on the defendant’s knowledge and purpose. Where a defendant lacks knowledge that transmitted data is tied to specific individuals, or lacks the purpose to disclose identifiable information, the statutory requirements for liability may not be satisfied.

Accordingly, Progin provides strong authority for defendants to argue that routine adtech data transmissions cannot satisfy the purposeful intent requirements of the ECPA’s HIPAA-based crime-tort exception or similarly worded privacy statutes – a position that may prove critical as courts continue to confront the growing wave of adtech privacy class actions.

California Federal Court Orders Disclosure Of Side Deals In Connection With Class Action Settlement

By Gerald L. Maatman, Jr. and Justin R. Donoho

Duane Morris Takeaways:  On December 23, 2025, Judge William Alsup of the U.S. District Court for the Northern District of California entered an order in Bartz, et al. v. Anthropic PBC, Case No. 24-CV-5417 (N.D. Cal. Dec. 23, 2025), requiring five law firms seeking a fee award in connection with a class action settlement to file a declaration setting forth the full extent of any of the firms’ actual or proposed fee-sharing agreements and the extent to which any arrangement may result in some class members receiving a sweeter recovery than other class members.  Judge Alsup also ordered preservation of all communications and other documents relating to such side deals. 

The ruling is significant because it shows that only appointed class counsel are eligible to receive a fee award in connection with a class action settlement, and that class counsel may not outsource their responsibilities to non-appointed counsel or enter into any other arrangements that may favor some class members to the detriment of other class members.  Furthermore, the ruling shows that any such side deals must be disclosed publicly prior to any final approval of a class action settlement.

Background

This case is one of several class actions that plaintiffs have filed alleging that developers of generative artificial intelligence  (“gen AI”) violated copyright laws by generating infringing outputs and/or by using unauthorized copies of copyrighted works as inputs to train the developer’s models. 

Many of these gen AI class actions are “bet-the-company” lawsuits, even for the world’s largest companies. Plaintiffs in gen AI class actions typically invoke the Copyright Act in order to seek millions — and sometimes even billions — of dollars on the theory that thousands or millions of unauthorized copies of copyrighted works, times up to $150,000 per copyrighted work for willful infringement, equals a crushing, settlement-inspiring number. 

In Bartz, the parties reached a $1.5 billion settlement, which the Court preliminarily approved, and which we blogged about previously here and here.

Following preliminary approval, two law firms appointed as class counsel and three additional non-appointed firms filed a petition for fees to be awarded in connection with the class action settlement.  The fee petition sought $225 million for class counsel and $75 million for the non-appointed law firms.  Id. at 3, 7.  These three non-appointed firms had agreed to gather contact information for the class list and to provide input on the claim form and claims process, two for the publisher class members (“Publishers’ Coordination Counsel”), and one for the author class members (“Authors’ Coordination Counsel”).  Id. at 3.

The Court’s Decision

The Court declined to rule on the fee petition, ordering that a number of disclosures and preservation efforts be made first in order “to set the record straight” concerning aspects of the fee petition.  Id. at 1.  This was necessary, according to the Court, because it appeared that counsel may have entered into one or more “side deals.”  Id. at 3.

As the Court explained, “[t]wo and only two law firms were ever appointed class counsel.”  Id. at 1.  Moreover, “preliminary approval and the class notices confirmed that only two firms were approved to serve the class … Those firms never proposed a fee splitting scheme, and none was ever even preliminarily approved.”  Id. at 7. 

As to the three non-appointed law firms, the Court found that they “cannot appoint [themselves] class counsel by showing up.  Nor can class counsel appoint someone else to do its work.”  Id. at 2.  As the Court further explained, it had not had a chance to vet the non-appointed counsel for conflicts, or to prevent duplication of effort by overlapping law firms.  Id. at 8.  In addition, the Court found it concerning that “we do not yet know whether ‘Publishers’ Coordination Counsel’ will share any part of their bonanza with one or more publishers so as to give those publishers a premium to not opt out … and thereby avoid triggering [the defendant]’s right to [walk away from] the settlement.”  Id.  Furthermore, the class notice “never alerted class members that still other lawyers would come out of the woodwork to seek a third again whatever their class counsel would seek for its work.”  Id. (emphasis added).

For these reasons, the Court ordered that, within one week, all law firms who filed fee petitions or on whose behalf fee petitions were filed, must publicly file a declaration (not under seal) setting forth the “full extent” to which such firm agreed or made a proposal “to share any portion(s) of any fee award in this class action or in any other class action (putative or certified) involving any party (or class member) herein,” and stating as to each such agreement or proposal its date, terms, the extent to which it is verbal and the extent to which it is in writing (or in an email or text or other message), and the parties and the names of all persons who made the agreement.  Id. at 10.  The Court also ordered public disclosure in a declaration of the “full extent to which any arrangement has been made or proposed by which any class member would receive a sweeter recovery than other class members.”  Id. at 10-11.  Finally, the Court further ordered that “[a]ll emails, messages, and written materials relating to any of the above shall be preserved for future potential discovery.”  Id. at 11.

Implications For Companies

The Bartz fee petition order is as extraordinary as it is unique. It offers strong precedent for any company defending a large class action and preparing to enter into a class action settlement.  Specifically, Bartz shows that plaintiffs’ firms seeking any portion of a fee award in connection with such a settlement will need to publicly disclose any side deals prior to any final settlement approval.  Therefore, settling defendants should consider seeking to discover any side-deal information before entering into such settlement.  That way, any obstacles to final settlement approval such as that presented by the Bartz fee petition order might be considered before the parties reach any settlement.

Executive Order Signals A Push Toward A Single, Federal “AI Rulebook” And A Retreat From The State Patchwork

By Gerald L. Maatman, Jr., Justin R. Donoho, and Hayley Ryan

Duane Morris Takeaways:  On December 11, 2025, President Donald J. Trump signed Executive Order 14365 titled “Ensuring a National Policy Framework for Artificial Intelligence.” The Order targets what it characterizes as a “patchwork” of State-by-State AI regulation and directs federal agencies to pursue a more uniform, national framework. Rather than serving as a technical AI governance roadmap, the Order focuses on limiting State AI laws through federal funding leverage, potential preemption, and expanded use of FTC enforcement authority. The discussion below highlights the Order’s core objectives and key implications for companies and employers. The Executive Order is required reading for any organization deploying AI or thinking of doing so.

The Executive Order’s Core Objectives

Reduce State AI Regulation By Framing It As A Competitiveness Problem

The Order emphasizes U.S. leadership in artificial intelligence and asserts that divergent State regulatory regimes increase compliance costs, especially for startups, and may impede innovation and deployment. It also raises concerns that certain State approaches could pressure companies to embed “ideological” requirements into AI systems.

Create Leverage Through Federal Funding: BEAD Broadband Money As The “Carrot And Stick”

Within 90 days, the Secretary of Commerce is directed to issue a policy notice describing the circumstances under which States may be deemed ineligible for certain broadband deployment funding under the Broadband Equity, Access, and Deployment (BEAD) program if they impose specified AI-related requirements. The notice is also intended to explain how fragmented State AI laws could undermine broadband deployment and high-speed connectivity goals.

Move Toward A Federal Reporting And Disclosure Standard

Within 90 days after the Order’s State-law “identification” process (discussed below), the Federal Communications Commission (FCC), in consultation with a Special Advisor for AI and Crypto, is instructed to consider whether to initiate a proceeding to adopt a federal reporting and disclosure standard for AI models that would preempt conflicting State requirements.

Use The FTC Act As An Enforcement Anchor And Tee Up Preemption Arguments

Within 90 days, the Federal Trade Commission (FTC) is directed, in consultation with other federal agencies, to issue a policy statement addressing how the FTC Act’s prohibition on unfair or deceptive acts or practices applies to AI models, with the express objective of preempting conflicting State laws.

Establish A Federal AI Litigation Task Force To Challenge State AI Laws

The Executive Order goes beyond policy statements and funding leverage by directing the Attorney General, within 30 days, to establish an AI Litigation Task Force dedicated exclusively to challenging State AI laws that conflict with the Order’s national policy objectives. The Task Force is authorized to pursue constitutional and preemption-based challenges, signaling an intent to bring coordinated, affirmative litigation against State AI regimes.

That enforcement effort is reinforced by a parallel State-law triage process. Within 90 days, the Secretary of Commerce must publish an evaluation identifying “onerous” State AI laws for potential challenge, particularly those that require AI systems to alter truthful outputs or compel disclosures that may implicate First Amendment or other constitutional concerns. Together, these provisions signal an intent to move quickly from policy articulation to test cases aimed at curbing State-level AI regulation.

Implications For Companies

Compliance Strategy May Shift, But Uncertainty Rises First

Although companies may welcome relief from conflicting State AI mandates, the Executive Order is likely to increase near-term uncertainty. Preemption disputes are likely, and the Order directs agency action rather than establishing a comprehensive statutory framework. Companies should avoid scaling back State-law compliance prematurely and should assume any federal override will be contested until resolved through rulemaking and litigation.

Class Action Exposure Will Shift, Not Disappear

Even if State AI laws are narrowed, plaintiffs’ lawyers are likely to pursue claims under more traditional theories, including consumer protection (particularly AI marketing and disclosure claims), employment discrimination, privacy and biometrics statutes, and contract or misrepresentation theories. The Order’s emphasis on FTC unfair and deceptive practices enforcement suggests that federal consumer protection standards may become the new focal point for both regulatory scrutiny and follow-on civil litigation.

Employment Risk Remains

Employers should expect ongoing scrutiny of AI use in hiring, promotion, and performance management, including disparate impact claims, vendor-liability arguments, and discovery disputes over model documentation, adverse impact analyses, and validation. Defensible governance, testing, and documentation remain critical.

Federal Contracting And Funding May Come With New AI Representations

If federal agencies adopt standardized AI disclosures, companies operating in regulated industries or participating in broadband initiatives may face new contract provisions governing AI use, along with enhanced reporting and audit obligations.

What Companies Should Do Now

Companies should begin by identifying where and how AI tools are being deployed, particularly in consumer-facing and employment-related contexts, and evaluating those uses under existing disclosure, privacy, and anti-discrimination laws. Public-facing statements about AI capabilities should be reviewed to ensure they are accurate and defensible, as increased regulatory and litigation focus on unfair or deceptive practices is likely to heighten scrutiny of AI-related claims. Companies should also review vendor relationships to confirm that contracts clearly address testing and validation obligations, incident response, audit rights, and appropriate allocation of risk for privacy and discrimination claims. Finally, organizations should remain prepared for continued regulatory change by maintaining State-law compliance readiness while monitoring federal agency actions that may shape a national AI framework.

Bottom Line

This Executive Order is a significant policy signal. The federal government is positioning itself to reduce State-by-State AI regulation and replace it with a framework centered on federal disclosure requirements and consumer protection enforcement. Companies should view the Order as an opportunity to prepare for a likely federal compliance baseline, without assuming State-law exposure will disappear in the near term.

Gen AI Key Decisions and Trends in 2025

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Legal Intelligencer by Justin Donoho entitled “Gen AI Class Action Key Decisions and Trends in 2025.”  The article is available here and is a must-read for corporate counsel involved with gen AI technologies.

This year has been a busy one in the generative artificial intelligence (gen AI) class action litigation landscape. New pleadings were filed, including several new class actions, several consolidated and amended complaints, and one appeal.  Several key decisions were issued, including a trio that formed a three-way split of authority on how to determine whether training a gen AI model on copyrighted materials constitutes “fair use” under the Copyright Act.  Additionally, one humongous settlement was reached.  Other notable decisions issued in 2025 in gen AI class actions include a decision denying class certification on the ground that the proposed class was an impermissible “fail-safe” class, dispositive decisions defining the contours of claims alleging that gen AI developers violated the Digital Millennium Copyright Act, a decision on the copyrightability of voice in the context of voice cloning technology, and multiple additional decisions on motions to compel, further clarifying the scope of documents that may or may not be discoverable in gen AI class actions.  This article analyzes these key decisions and trends.

Implications For Corporations

With gen AI continuing to proliferate and the current presidential administration continuing the prior administration’s policy goals of sustaining and enhancing America’s global AI dominance, gen AI litigation is multiplying. We should expect to see an upward trend of key decisions and new cases in the remainder of this year and beyond as this burgeoning area of the law continues to unfold.

Third Circuit Affirms Dismissal Of CIPA Adtech Class Action Because A Party To A Communication Cannot Eavesdrop On Itself

By Gerald L. Maatman, Jr., Justin R. Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On November 13, 2025, in Cole, et al. v. Quest Diagnostics, Inc., 2025 U.S. App. LEXIS 29698 (3d Cir. Nov. 13, 2025), the U.S. Court of Appeals for the Third Circuit affirmed a ruling of the U.S. District Court for the District of New Jersey dismissing a class action complaint brought by website users against a diagnostic testing company alleging that the company’s use of website advertising technology violated the California Invasion of Privacy Act (“CIPA”) and California’s Confidentiality of Medical Information Act (“CMIA”). 

The ruling is significant because it confirms two important principles: (1) CIPA’s prohibition against eavesdropping does not apply to an online advertising company, like Facebook, when it directly receives information from the users’ browser; and (2) the CMIA is not triggered unless plaintiffs plausibly allege the disclosure of substantive medical information.

Background

This case is one of a legion of nationwide class actions that plaintiffs have filed alleging that third-party technologies (“adtech”) captured user information for targeted advertising. These tools, such as the Facebook Tracking Pixel, are widely used across millions of consumer products and websites.

In these cases, plaintiffs typically assert claims under federal or state eavesdropping statutes, consumer protection laws, or other privacy statutes. Because statutes like CIPA allow $5,000 in statutory damages per violation, plaintiffs frequently seek millions, or even billions, in potential recovery, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $5,000 per claimant, equals a huge amount of damages. While many of these suits initially targeted healthcare providers, plaintiffs have sued companies across nearly every industry, including retailers, consumer products companies, universities, and the adtech companies themselves.

Several of these cases have resulted in multimillion-dollar settlements; others have been dismissed at the pleading stage (as we blogged about here) or at the summary judgment stage (as we blogged about here and here). Still, most remain undecided, and with some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.

In Cole, the plaintiffs alleged that the defendant diagnostic testing company used the Facebook Tracking Pixel on both its general website and its password-protected patient portal.  Id. at *1-2.  According to the plaintiffs, when a user accessed the general website, the Pixel intercepted and transmitted to Facebook “the URL of the page requested, along with the title of the page, keywords associated with the page, and a description of the page.” Id. at *2-3. Likewise, when a user accessed the password-protected website, the Pixel allegedly transmitted the URL “showing, at a minimum, that a patient has received and is accessing test results.” Id. at *3.

Plaintiffs asserted that these transmissions constituted (1) a CIPA violation because the company supposedly aided Facebook in “intercepting” plaintiffs’ internet communications, and (2) a CMIA violation because the company allegedly disclosed URLs associated with webpages plaintiffs accessed to view test results along with plaintiffs’ identifying information linked to users’ Facebook accounts. Id. at *3.

The company moved to dismiss, and, in separate orders, the district court dismissed both claims. See 2024 U.S. Dist. LEXIS 116350; 2025 U.S. Dist. LEXIS 7205.

As to the CIPA claim, the district court found that CIPA “is aimed only at ‘eavesdropping, or the secret monitoring of conversations by third parties,’” and that Facebook was not a third party because it received information directly from plaintiffs’ browsers about webpages they visited. 2025 U.S. Dist. LEXIS 7205, at *7-8 (quoting In Re Google Inc. Cookie Placement Consumer Privacy Litig., 806 F.3d 125, 140-41 (3d Cir. 2015)).  As to the CMIA claim, the district court found that plaintiffs alleged only that the company disclosed that a patient accessed test results but not what kind of medical test was done or what the results were. 2024 U.S. Dist. LEXIS 116350, at *15. Accordingly, the district court held that plaintiffs failed to allege the disclosure of “substantive” medical information as required under the CMIA. Id.

Plaintiffs appealed both rulings.

The Court’s Decision

The Third Circuit affirmed. Id. at *1.

On the CIPA claim, the Third Circuit explained that “[a]s a recipient of a direct communication from Plaintiffs’ browsers, Facebook was a participant in Plaintiffs’ transmissions such that [the company] did not aid or assist Facebook in eavesdropping on or intercepting such communications, even if done without the users’ knowledge.” 2025 U.S. App. LEXIS 29698, at *6.  With no eavesdropping, “Plaintiffs’ CIPA claim was properly dismissed.” Id. at *7.

On the CMIA claim, the Third Circuit explained that “at most, Plaintiffs alleged that [the company] disclosed Plaintiffs had been its patients, which is not medical information protected by CMIA.” Id. at *8. Thus, the Third Circuit held that the district court properly dismissed the CMIA claim. Id. at *9.

Implications For Companies

Cole offers strong precedent for any company defending adtech class action claims (1) brought under CIPA’s eavesdropping provision where the third-party adtech company directly receives the information from users’ browsers and (2) brought under the CMIA where the alleged disclosure merely shows that a person was a patient, without revealing any substantive information about the person’s medical condition or test results.

The latter point continues to appear across adtech class actions.  Just as the plaintiffs in Cole failed to plausibly allege the disclosure of substantive medical information,  courts have dismissed similar claims where plaintiffs allege disclosure of protected health information (“PHI”) without actually identifying what PHI was supposedly shared (as we blogged about here).  These decisions reinforce that adtech plaintiffs must identify the specific medical information allegedly disclosed to plausibly plead claims under the CMIA or for invasion of privacy.

Data Security And Privacy Liability – Takeaways From The Sedona Conference Working Group 11 Midyear Meeting In Ft. Lauderdale

By Justin R. Donoho

Duane Morris Takeaways: Data privacy and data breach class action litigation continue to explode.  At the Sedona Conference Working Group 11 on Data Security and Privacy Liability, held in Fort Lauderdale, Florida, on November 6-7, 2025, Justin Donoho of the Duane Morris Class Action Defense Group served as a moderator for a panel discussion, “Legislative Drafting Considerations: Lessons from Colorado’s Privacy and AI Law Intersection.”  The working group meeting, which spanned two days and drew over 40 participants, produced excellent dialogues on this topic and others, including website advertising technologies, judicial perspectives on privacy and data breach litigation, onward transfer of consumer PII in M&A and bankruptcy contexts, venue, forum, and choice of law in privacy and data breach class actions, a privacy and data security regulator roundtable, revisiting notice and consent for facial recognition, and the application of attorney-client privilege in the cybersecurity context.

The Conference’s robust agenda featured dialogue leaders from a wide array of backgrounds, including government officials, industry experts, federal and state judges, in-house attorneys, cyber and data privacy law professors, plaintiffs’ attorneys, and defense attorneys.  In a masterful way, the agenda provided valuable insights for participants toward this working group’s mission, which is to identify and comment on trends in data security and privacy law, in an effort to help organizations prepare for and respond to data breaches, and to assist attorneys and judicial officers in resolving questions of legal liability and damages.

Justin had the privilege of speaking about lessons from the intersection of the Colorado Privacy Act (CPA) and Colorado AI Act (CAIA) and how those lessons might guide future legislatures in drafting AI and data privacy statutes.  Highlights from his presentation included the lessons learned from the intersection of the CPA and CAIA and, among them, the human steps a company may take when using an AI hiring tool to avoid triggering the CPA’s opt-out right in factual scenarios where that right might apply, as those steps are discussed in his article, “Five Human Best Practices to Mitigate the Risk of AI Hiring Tool Noncompliance with Antidiscrimination Statutes,” Journal of Robotics, Artificial Intelligence & Law, Volume 8, No. 4, July-August 2025.

Finally, one of the greatest joys of participating in Sedona Conference meetings is the opportunity to draw on the wisdom of fellow presenters and other participants from around the globe.  Highlights included:

  1. Experts of all stripes presenting a draft opus on advertising technologies that describes ways our laws could move beyond outdated statutes with draconian statutory penalties by focusing instead on any actual harms resulting from such technologies.
  2. A lively dialogue among my panelists and other participants dissecting the Colorado Privacy Act, Colorado AI Act, and those statutes’ application to AI hiring tools in an effort to offer guidance to future legislators drafting similar statutes.
  3. Federal and state judges offering tips for advocacy when presenting technical cybersecurity and data privacy issues to the court.
  4. Panelists with different backgrounds discussing the law regarding when a company that has obtained personal data with consent can and cannot transfer the data in M&A and bankruptcy contexts.
  5. Litigators from both sides of the “v.” debating venue, forum, choice of law, MDL, and CAFA issues in the context of privacy and data breach class actions.
  6. State regulators discussing their increasing data privacy and cybersecurity departments and priorities for enforcement in these areas. 
  7. Data privacy lawyers and experts discussing the evolution of facial recognition technology and the need to tailor notice and consent processes to risks associated with the technologies and use cases involved.
  8. Cybersecurity lawyers and experts discussing best practices for maintaining attorney-client privilege when responding to a cybersecurity incident.

Thank you to the Sedona Conference Working Group 11 and its incredible team, the fellow dialogue leaders, the engaging participants, and all others who helped make this meeting in Fort Lauderdale, Florida, an informative and unforgettable experience.

For more information on the Duane Morris Class Action Group, including its Data Privacy Class Action Review e-book, and Data Breach Class Action Review e-book, please click the links here and here.

California Federal Court Dismisses Adtech Class Action For Failure To Specify Highly Offensive Invasion Of Privacy

By Gerald L. Maatman, Jr., Justin R. Donoho, Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  On October 30, 2025, in DellaSalla, et al. v. Samba TV, Inc., 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025), Judge Jacqueline Scott Corley of the U.S. District Court for the Northern District of California dismissed a complaint brought by TV viewers against a TV technology company alleging that, through its provision of advertising technology in the plaintiffs’ smart TVs, the company committed the common law tort of invasion of privacy and violated the Video Privacy Protection Act (“VPPA”), the California Invasion of Privacy Act (“CIPA”), and California’s Comprehensive Computer Data Access and Fraud Act (“CDAFA”).  The ruling is significant as it shows that in the hundreds of adtech class actions across the nation alleging that adtech violates privacy laws, plaintiffs do not plausibly state a common law claim for invasion of privacy unless they specify in the complaint the information allegedly disclosed and explain how such a disclosure was highly offensive.  The case is also significant in that it shows that the VPPA does not apply to video analytics companies, and that California privacy statutes do not apply extraterritorially to plaintiffs located outside California.

Background

This case is one of a legion of class actions that plaintiffs have filed nationwide alleging that third-party technology captured plaintiffs’ information and used it to facilitate targeted advertising. 

This software, often called advertising technologies or “adtech,” is a common feature of millions of consumer products and websites in operation today.  In adtech class actions, the key issue is often a claim brought under a federal or state wiretap act, a consumer fraud act, or the VPPA, because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $2,500 per claimant in statutory damages under the VPPA, for example, equals a huge amount of damages.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they have filed suits against companies that span nearly every industry, including retailers, consumer products companies, universities, and the adtech companies themselves.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided. 

In DellaSalla, the plaintiffs brought suit against a TV technology company that embedded a chip with analytics software in plaintiffs’ smart TVs.  Id. at *1, 5.  According to the plaintiffs, the company intercepted the plaintiffs’ “private video-viewing data in real time, including what [t]he[y] watched on cable television and streaming services,” and tied this information to each plaintiff’s unique anonymized identifier in order to “facilitate targeted advertising,” all allegedly without the plaintiffs’ consent.  Id. at *1.  Based on these allegations, the plaintiffs claimed that the TV technology company violated the CIPA, CDAFA, and VPPA, and committed the common-law tort of invasion of privacy. 

The company moved to dismiss, arguing that the CIPA and CDAFA did not apply because the plaintiffs were located outside California, that the VPPA did not apply because the TV technology company was not a “video tape service provider,” and that the plaintiffs failed to plausibly allege a highly offensive violation of a privacy interest.

The Court’s Decision

The Court agreed with the TV technology company and dismissed the complaint in its entirety, with leave to amend any existing claims but not to add any additional claims without further leave.

On the CIPA and CDAFA claims, the Court found that the plaintiffs did not allege that any unlawful conduct occurred in California.  Instead, the plaintiffs alleged that the challenged conduct occurred in their home states of North Carolina and Oklahoma.  Id. at *1, 3-4.  For these reasons, the Court dismissed the CIPA and CDAFA claims, finding that these statutes do not apply extraterritorially.  Id.

On the VPPA claim, the Court addressed the VPPA’s definition of  “video tape service provider,” which is “any person, engaged in the business … of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.”  Id. at *5.  The plaintiffs argued that the TV technology company was a video tape service provider “because its technology is incorporated in Smart TVs, which deliver prerecorded videos.  [The defendant] advertises its technology precisely as providing a ‘better viewing experience’ ‘immersive on-screen experiences’ and a ‘more tailored ad experience’ through its technology.”  Id.  The Court rejected this argument. It held that “[t]his allegation does not plausibly support an inference, [the defendant]—an analytics software provider—facilitated the exchange of a video product. Rather, the allegations support an inference [the defendant] collected information about Plaintiffs’ use of a video product, but not that it provided the product itself.”  Id. (emphasis added).

On the common law claim for invasion of privacy, the TV technology company argued that this claim failed because the plaintiffs “have no expectation of privacy in the information it collects and Plaintiffs have not alleged a highly offensive intrusion.”  In examining this argument, the Court noted that the plaintiffs had only provided “vague references” to the information supposedly intercepted.  Id. at *4.  This information included video-viewing data generally (none specified) tied to an anonymized identifier.  Id. at *1, 5.  Thus, the Court agreed with the defendant’s argument and found that the plaintiffs identified “no embarrassing, invasive, or otherwise private information collected” and no explanation of how the tracking of video viewing history with an anonymized ID caused plaintiffs “to experience any kind of harm that is remotely similar to the ‘highly offensive’ inferences or disclosures that were actionable at common law.”  Id. at *5.  In sum, the Court concluded that “Plaintiffs have not plausibly alleged a highly offensive violation of a privacy interest.”

Implications For Companies

DellaSalla provides powerful precedent for any company opposing adtech class action claims (1) brought under statutes enacted in states other than the plaintiffs’ place of residence; (2) brought under the federal VPPA where the company allegedly transmitted video usage information, as opposed to any videos themselves; and (3) alleging common-law invasion of privacy, where the plaintiffs have not specified the information disclosed and why such a disclosure is highly offensive. 

The last point is a recurring theme in adtech class actions.  Just as the plaintiffs suing a TV technology company here did not plausibly state a common-law claim for invasion of privacy without identifying the videos watched and any highly offensive harm in associating those videos with an anonymized ID, so too did a plaintiff fail to plausibly state a claim for invasion of privacy premised on adtech’s disclosure of protected health information (“PHI”) without specifying the PHI allegedly disclosed (as we blogged about here).  These cases show that for adtech plaintiffs to plausibly plead claims for invasion of privacy, they at least need to identify what allegedly private information was disclosed and explain how the alleged disclosure was highly offensive.

New York Federal Court’s OpenAI Discovery Orders Provide Key Insights For Companies Navigating AI Preservation Standards

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: In the case of In Re OpenAI, Inc. Copyright Infringement Litigation, No. 23 Civ. 11195 (S.D.N.Y.), Magistrate Judge Ona T. Wang issued a series of discovery orders that signal how courts are likely to approach AI data, privacy, and discovery obligations. Judge Wang’s orders illustrate the growing tension between AI system transparency and data privacy compliance, and how courts are trying to balance them.

For companies that develop or use AI, these rulings highlight both the risk of expansive preservation demands and the opportunity to share proportional, privacy-conscious discovery frameworks. Below is an overview of these decisions and the takeaways for in-house counsel, privacy officers, and litigation teams.

Background

In May 2025, the U.S. District Court for the Southern District of New York issued a preservation order in a copyright action challenging the use of The New York Times’ content to train large language models. The order required OpenAI to preserve and segregate certain output log data that would otherwise be deleted. Days later, the Court denied OpenAI’s motion to reconsider or narrow that directive. By October 2025, however, the Court approved a negotiated modification that terminated OpenAI’s ongoing preservation obligations while requiring continued retention of the already-segregated data.

The Court’s Core Rulings

  1. Forward-Looking Preservation Now, Arguments Later

On May 13, 2025, the Court entered an order requiring OpenAI to preserve and segregate output log data that would otherwise be deleted, including data subject to user deletion requests or statutory erasure rights. See id., ECF No. 551. The rationale: once litigation begins, even transient data can be critical to issues like bias and representativeness. The Court stressed that it was too early to weigh proportionality, so preservation would continue until a fuller record emerged.

  2. Reconsideration Denied, Preservation Continues

A few days later, when OpenAI sought reconsideration or modification of the preservation order, the Court denied the request without prejudice. Id., ECF No. 559. The Court noted that it was premature to decide proportionality and potential sampling bias until additional information was developed.

  3. A Negotiated “Sunset” and Privacy Carve-Outs

By October 2025, the parties agreed to wind down the broad preservation obligation. On October 9, 2025, the Court approved a stipulated modification that ended OpenAI’s ongoing preservation duty as of September 26, 2025, limited retention to already-segregated logs, excluded requests originating from the European Economic Area, Switzerland, and the United Kingdom for privacy compliance, and added targeted, domain-based preservation for select accounts listed in an appendix. Id., ECF No. 922.

This evolution — from blanket to targeted, time-limited preservation — shows courts’ willingness to adapt when parties document technical feasibility, privacy conflicts, and litigation need.

Implications For Companies

  1. Evidence vs. Privacy: Courts Expect You to Reconcile Both

These rulings show that courts will not accept “privacy law conflicts” as a stand-alone excuse to delete potentially relevant data. Instead, companies must show they can segregate, anonymize, or retain data while maintaining compliance. The OpenAI orders make clear: when evidence may be lost, segregation beats destruction.

  2. Proportionality Still Matters

Even as courts push for preservation, they remain attentive to proportionality. While early preservation orders may seem sweeping, judges are open to refining them once the factual record matures. Companies that track the cost, burden, and privacy impact of compliance will be best positioned to negotiate tailored limits.

  3. Preservation Is Not Forever

The October 2025 stipulation illustrates how to exit an indefinite obligation: offer targeted cohorts, geographic exclusions, and sunset provisions supported by a concrete record. Courts will listen if you bring data, not just arguments.

A Playbook for In-House Counsel

  1. Map Your AI Data Universe

Inventory all AI-related data exhaust: prompts, outputs, embeddings, telemetry, and retention settings. Identify controllers, processors, and jurisdictions.

  2. Build “Pause” Controls

Design systems capable of segregating or pausing deletion by user, region, or product line. This technical agility is key when a preservation order issues.

  3. Update Litigation Hold Templates for AI

Traditional holds miss ephemeral or system-generated data. Draft holds that instruct teams how to pause automated deletion while complying with privacy statutes.

  4. Propose Targeted Solutions

When facing broad discovery demands, offer alternatives: limit by time window, geography, or user cohort. Courts will accept reasonable, well-documented compromises.

  5. Build Toward an Off-Ramp

Preservation obligations can sunset — but only if supported by metrics. Track preserved volumes, costs, and privacy burdens to justify targeted, defensible limits.

Conclusion

The OpenAI orders reflect a new judicial mindset: preserve broadly first, negotiate smartly later. AI developers and data-driven businesses should expect similar directives in future litigation. Those that engineer for preservation flexibility, document privacy compliance, and proactively negotiate scope will avoid the steep costs of one-size-fits-all discovery — and may even help set the industry standard for balanced AI litigation governance.

California Federal Court Narrows CIPA “In-Transit” Liability for Common Website Advertising Technology and Urges Legislature to Modernize Privacy Law

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 17, 2025, in Doe v. Eating Recovery Center LLC, No. 23-CV-05561, ECF 167 (N.D. Cal. Oct. 17, 2025), Judge Vince Chhabria of the U.S. District Court for the Northern District of California granted summary judgment to Eating Recovery Center, finding no violation of the California Invasion of Privacy Act (CIPA) where the Meta Pixel collected website event data. Specifically, the Court held that Meta did not “read” those contents while the communications were “in transit.” In so holding, the Court applied the rule of lenity, construed CIPA narrowly, and urged the California Legislature “to step up” and modernize the statute for the digital age. Id. at 2.

This decision is significant because Judge Chhabria candidly described CIPA as “a total mess,” noting it is often “borderline impossible” to determine whether the law – enacted in 1967 to criminalize wiretapping and eavesdropping on confidential communications – applies to modern internet transmissions. Id. at 1. As the Court observed, CIPA “was a mess from the get-go, but the mess gets bigger and bigger as the world continues to change and as courts are called upon to apply CIPA’s already-obtuse language to new technologies.” Id.  This is a “must read” decision for corporate counsel dealing with privacy issues and litigation.

Background

This class action arose after plaintiff, Jane Doe, visited Eating Recovery Center’s (ERC) website to research anorexia treatment and later received targeted advertisements. Plaintiff alleged that ERC’s use of the Meta Pixel caused Meta to receive sensitive URL and event data from her interactions with ERC’s site, resulting in targeted ads related to eating disorders.

ERC had installed the standard Meta Pixel on its website, which automatically collected page URLs, time on page, referrer paths, and certain click events to help ERC build custom audiences for advertising. Id. at 3. Plaintiff alleged that ERC’s use of the Pixel allowed Meta to intercept her communications in violation of CIPA, Cal. Penal Code § 631(a). She also brought claims under the California Medical Information Act (CMIA), the California Unfair Competition Law (UCL), and for common law unjust enrichment. The UCL claim was dismissed at the pleading stage.

ERC later moved for summary judgment on the remaining CIPA, CMIA, and unjust enrichment claims. In a separate order, the Court granted summary judgment on the CMIA and unjust enrichment claims, finding that plaintiff was not a “patient” under the CMIA and that there was no evidence ERC had been unjustly enriched. See id., ECF 168 at 1-2.

The Court’s Decision

With respect to the CIPA claim, the parties disputed two elements under CIPA § 631(a): (1) whether the event data obtained by Meta constituted “contents” of plaintiff’s communication with ERC, and (2) whether Meta read, attempted to read, or attempted to learn those contents while they were “in transit.” ECF 167 at 6.

The Court first held that URLs and event data can constitute the “contents” of a communication because they can reveal substantive information about a user’s activities – such as researching medical treatment. Id. at 7. The Court thus departed from other courts that have held differently on this particular issue when considering additional facts or allegations not presented here (such as encryption, or an inability to reasonably identify the data among lines of code).  However, the Court concluded that Meta did not read or attempt to learn any contents while the communications were “in transit.” Instead, Meta processed the data only after it had reached its intended recipient (i.e., ERC, the website operator).

In reaching that conclusion, Judge Chhabria relied on undisputed testimony about Meta’s internal filtering processes: “Meta’s corporate representative testified that, before logging the data that it obtains from websites, Meta filters URLs to remove information that it does not wish to store (including information that Meta views as privacy protected).” Id. at 8.

This evidence supported the finding that Meta’s conduct involved post-receipt filtering rather than contemporaneous “reading” or “learning.” Id. at 9. The Court emphasized that expanding “in transit” to include post-receipt processing would improperly criminalize routine website analytics practices. Because CIPA is both a criminal statute and a source of punitive civil penalties, the Court applied the rule of lenity to adopt a narrow interpretation. Id. at 11-12. The Court further cautioned that an overly broad reading would render CIPA’s related provision (§ 632, prohibiting eavesdropping and recording) largely redundant. Id. at 10.

Finding that Meta did not read, attempt to read, or attempt to learn the contents of Doe’s communications while they were in transit, the Court granted summary judgment to ERC on the CIPA claim. Id. at 12.

The opinion concluded by reiterating that California’s decades-old wiretap law is “virtually impossible to apply [] to the online world,” urging the Legislature to “go back to the drawing board on CIPA,” and suggesting that it “would probably be best to erase the board entirely and start writing something new.” Id.

Implications For Companies

The Doe decision narrows one significant avenue for CIPA liability, particularly for routine use of website analytics and advertising pixels. The Northern District of California has now drawn a distinction between data “read” while in transit and data processed after receipt, significantly reducing immediate CIPA exposure for standard web advertising tools.

At the same time, the court’s reasoning underscores that pixel-captured data may be considered by some courts as “contents” of a communication under CIPA, although there is a split of authority on this issue. Companies could therefore face potential exposure under other California privacy statutes, including the CMIA, the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA), depending on the data involved and how it is used.

Organizations should continue to inventory the data they share through advertising technologies, minimize sensitive information in URLs, and ensure clear and accurate privacy disclosures. Because the court expressly invited legislative reform, companies should also monitor ongoing case law and potential statutory amendments.

Ultimately, Doe v. Eating Recovery Center reflects a pragmatic narrowing of CIPA’s “in transit” requirement while reaffirming that CIPA was not intended to cover common website advertising technologies or, in any event, should not be interpreted as such given the harsh statutory penalties involved and the rule of lenity — like the Supreme Judicial Court of Massachusetts concluded regarding Massachusetts’ wiretap act, as we previously blogged about here.  While this case is a big win for website operators, companies relying on third-party analytics should treat this decision as guidance—not immunity—and continue adopting privacy-by-design principles in their data collection and vendor management practices.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
