Executive Order Signals A Push Toward A Single, Federal “AI Rulebook” And A Retreat From The State Patchwork

By Gerald L. Maatman, Jr., Justin R. Donoho, and Hayley Ryan

Duane Morris Takeaways:  On December 11, 2025, President Donald J. Trump signed Executive Order 14365 titled “Ensuring a National Policy Framework for Artificial Intelligence.” The Order targets what it characterizes as a “patchwork” of State-by-State AI regulation and directs federal agencies to pursue a more uniform, national framework. Rather than serving as a technical AI governance roadmap, the Order focuses on limiting State AI laws through federal funding leverage, potential preemption, and expanded use of FTC enforcement authority. The discussion below highlights the Order’s core objectives and key implications for companies and employers. The Executive Order is required reading for any organization deploying AI or thinking of doing so.

The Executive Order’s Core Objectives

Reduce State AI Regulation By Framing It As A Competitiveness Problem

The Order emphasizes U.S. leadership in artificial intelligence and asserts that divergent State regulatory regimes increase compliance costs, especially for startups, and may impede innovation and deployment. It also raises concerns that certain State approaches could pressure companies to embed “ideological” requirements into AI systems.

Create Leverage Through Federal Funding: BEAD Broadband Money As The “Carrot And Stick”

Within 90 days, the Secretary of Commerce is directed to issue a policy notice describing the circumstances under which States may be deemed ineligible for certain broadband deployment funding under the Broadband Equity, Access, and Deployment (BEAD) program if they impose specified AI-related requirements. The notice is also intended to explain how fragmented State AI laws could undermine broadband deployment and high-speed connectivity goals.

Move Toward A Federal Reporting And Disclosure Standard

Within 90 days after the Order’s State-law “identification” process (discussed below), the Federal Communications Commission (FCC), in consultation with a Special Advisor for AI and Crypto, is instructed to consider whether to initiate a proceeding to adopt a federal reporting and disclosure standard for AI models that would preempt conflicting State requirements.

Use The FTC Act As An Enforcement Anchor And Tee Up Preemption Arguments

Within 90 days, the Federal Trade Commission (FTC) is directed, in consultation with other federal agencies, to issue a policy statement addressing how the FTC Act’s prohibition on unfair or deceptive acts or practices applies to AI models, with the express objective of preempting conflicting State laws.

Establish A Federal AI Litigation Task Force To Challenge State AI Laws

The Executive Order goes beyond policy statements and funding leverage by directing the Attorney General, within 30 days, to establish an AI Litigation Task Force dedicated exclusively to challenging State AI laws that conflict with the Order’s national policy objectives. The Task Force is authorized to pursue constitutional and preemption-based challenges, signaling an intent to bring coordinated, affirmative litigation against State AI regimes.

That enforcement effort is reinforced by a parallel State-law triage process. Within 90 days, the Secretary of Commerce must publish an evaluation identifying “onerous” State AI laws for potential challenge, particularly those that require AI systems to alter truthful outputs or compel disclosures that may implicate First Amendment or other constitutional concerns. Together, these provisions signal an intent to move quickly from policy articulation to test cases aimed at curbing State-level AI regulation.

Implications For Companies

Compliance Strategy May Shift, But Uncertainty Rises First

Although companies may welcome relief from conflicting State AI mandates, the Executive Order is likely to increase near-term uncertainty. Preemption disputes are likely, and the Order directs agency action rather than establishing a comprehensive statutory framework. Companies should avoid scaling back State-law compliance prematurely and should assume any federal override will be contested until resolved through rulemaking and litigation.

Class Action Exposure Will Shift, Not Disappear

Even if State AI laws are narrowed, plaintiffs’ lawyers are likely to pursue claims under more traditional theories, including consumer protection (particularly AI marketing and disclosure claims), employment discrimination, privacy and biometrics statutes, and contract or misrepresentation theories. The Order’s emphasis on FTC unfair and deceptive practices enforcement suggests that federal consumer protection standards may become the new focal point for both regulatory scrutiny and follow-on civil litigation.

Employment Risk Remains

Employers should expect ongoing scrutiny of AI use in hiring, promotion, and performance management, including disparate impact claims, vendor-liability arguments, and discovery disputes over model documentation, adverse impact analyses, and validation. Defensible governance, testing, and documentation remain critical.

Federal Contracting And Funding May Come With New AI Representations

If federal agencies adopt standardized AI disclosures, companies operating in regulated industries or participating in broadband initiatives may face new contract provisions governing AI use, along with enhanced reporting and audit obligations.

What Companies Should Do Now

Companies should begin by identifying where and how AI tools are being deployed, particularly in consumer-facing and employment-related contexts, and evaluating those uses under existing disclosure, privacy, and anti-discrimination laws. Public-facing statements about AI capabilities should be reviewed to ensure they are accurate and defensible, as increased regulatory and litigation focus on unfair or deceptive practices is likely to heighten scrutiny of AI-related claims. Companies should also review vendor relationships to confirm that contracts clearly address testing and validation obligations, incident response, audit rights, and appropriate allocation of risk for privacy and discrimination claims. Finally, organizations should remain prepared for continued regulatory change by maintaining State-law compliance readiness while monitoring federal agency actions that may shape a national AI framework.

Bottom Line

This Executive Order is a significant policy signal. The federal government is positioning itself to reduce State-by-State AI regulation and replace it with a framework centered on federal disclosure requirements and consumer protection enforcement. Companies should view the Order as an opportunity to prepare for a likely federal compliance baseline, without assuming State-law exposure will disappear in the near term.

Illinois Supreme Court Imposes Stricter Standing Test For “No-Injury” Class Actions Premised On Statutory Violations

By Gerald L. Maatman, Jr., Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  In Fausett v. Walgreen Co., 2025 IL 131444 (Nov. 20, 2025), the Illinois Supreme Court narrowly construed the private right of action set forth in the federal Fair Credit Reporting Act (FCRA), holding that because the FCRA does not expressly identify who may sue for violations, a consumer may bring suit only upon showing that a violation caused a concrete injury. Thus, at least for FCRA actions, a plaintiff must now allege a “concrete injury” in Illinois state courts similar to what a plaintiff must allege to establish Article III standing in federal courts. This is a significant development, as Illinois courts have not previously required “concrete-injury” allegations for statutory claims under the state’s more liberal standing test.

Fausett is therefore a must-read opinion that represents an obstacle for future plaintiffs pursuing “no-injury” claims premised on the FCRA, in addition to other federal statutes containing similar private rights of action.

Case Background

Plaintiff alleged that Defendant violated the Fair and Accurate Credit Transactions Act (FACTA) – a provision of the FCRA – by printing a receipt containing more than the last five digits of her debit card number. Plaintiff sought statutory damages for the alleged FACTA violation, though she did not claim the violation led to actual harm by, for example, a third party using the receipt to steal her identity.

Plaintiff moved to certify a class of individuals for whom Defendant printed receipts containing more than the last five digits of their payment card numbers. In granting class certification, the trial court rejected Defendant’s argument that Plaintiff had no viable claim due to lack of standing. The trial court reasoned that Illinois courts are not bound by the same jurisdictional restrictions applicable to federal courts and that the Illinois Supreme Court’s decision in Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186, established that “a violation of one’s rights afforded by a statute is itself sufficient for standing.” Fausett, 2025 IL 131444, ¶ 15. The Illinois Appellate Court affirmed the trial court’s class certification order, and Defendant subsequently appealed to the Illinois Supreme Court.

The Illinois Supreme Court’s Decision

The issue before the Illinois Supreme Court was whether standing existed in Illinois courts for a plaintiff alleging a FACTA violation that did not result in actual harm.

The Court began by distinguishing the standing doctrines applied in Illinois state courts vs. federal courts. The Court observed that Illinois courts are not bound by federal standing law and that Illinois standing principles apply to all claims pending in state court – even those premised on federal statutes.

The Court then identified the two types of standing that exist in Illinois courts: (1) common-law standing, which – like Article III – requires an injury in fact to a legally recognized interest; and (2) statutory standing, which requires the fulfillment of statutory conditions to sue for legislatively created relief. See id. ¶ 39 (for statutory standing, the legislature creates a right of action and determines “who shall sue, and the conditions under which the suit may be brought”) (citation omitted). The Court further noted that a statutory violation, without actual harm, can establish statutory standing only where the statute specifically authorizes a private lawsuit for violations.

Turning to Plaintiff’s FACTA lawsuit, the Court determined that Plaintiff’s claim could not invoke statutory standing because the FCRA’s liability provisions “fail to include standing language. In other words, Congress did not expressly define the parties who have the right to sue for the statutory damages established in FCRA.” Id. ¶ 40; see also id. ¶ 44 (“the plain and unambiguous language” of the FCRA “does not state the consumer or an aggrieved person may file the cause of action”). Thus, because the FCRA is “silent as to who may bring the cause of action for damages,” Plaintiff’s FACTA claim “does not implicate statutory standing principles, and thus common-law standing applies to plaintiff’s suit.” Id.

As for common-law standing, the Court concluded that Plaintiff’s claim did not satisfy Illinois’s common-law standing test, under which an alleged injury, “whether actual or threatened, must be: (1) distinct and palpable; (2) fairly traceable to the defendant’s actions; and (3) substantially likely to be prevented or redressed by the grant of the requested relief.” Id. ¶ 39 (quoting Petta v. Christie Business Holdings Co., P.C., 2025 IL 130337, ¶ 18). The injury alleged must also be concrete – meaning that a plaintiff alleging only a purely speculative future injury lacks a sufficient interest to have standing.

The Court held that Plaintiff failed to allege or prove a concrete injury because she conceded that she was unaware of any harm to her credit or identity caused by the alleged FACTA violation, and she could not identify anyone who had even seen her receipts “beyond the cashier, herself, and her attorneys.” See id. ¶ 48. Thus, Plaintiff could only show an increased risk of identity theft – something the Court has found to be insufficient to confer standing for a complaint seeking money damages. Because Plaintiff lacked a viable claim due to lack of standing, the Court held that the trial court abused its discretion in granting Plaintiff’s motion for class certification.

Implications Of The Fausett Decision

Fausett will impact FCRA class actions in a significant manner by precluding plaintiffs from bringing certain “no-injury” class actions in Illinois state courts. Federal courts have regularly dismissed such claims for lack of Article III standing based on the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, 578 U.S. 330 (2016).

Fausett now forecloses plaintiffs from refiling the same claims in Illinois state courts, leaving plaintiffs without a venue to prosecute no-injury FCRA claims in Illinois. Importantly, the Fausett decision will likely reach beyond the FCRA context, as other federal consumer-protection statutes contain liability provisions with private-right-of-action language similar to the language found in the FCRA.

Third Circuit Affirms Dismissal Of CIPA Adtech Class Action Because A Party To A Communication Cannot Eavesdrop On Itself

By Gerald L. Maatman, Jr., Justin R. Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On November 13, 2025, in Cole, et al. v. Quest Diagnostics, Inc., 2025 U.S. App. LEXIS 29698 (3d Cir. Nov. 13, 2025), the U.S. Court of Appeals for the Third Circuit affirmed a ruling of the U.S. District Court for the District of New Jersey dismissing a class action complaint brought by website users against a diagnostic testing company alleging that the company’s use of website advertising technology violated the California Invasion of Privacy Act (“CIPA”) and California’s Confidentiality of Medical Information Act (“CMIA”).

The ruling is significant because it confirms two important principles: (1) CIPA’s prohibition against eavesdropping does not apply to an online advertising company, like Facebook, when it directly receives information from the users’ browser; and (2) the CMIA is not triggered unless plaintiffs plausibly allege the disclosure of substantive medical information.

Background

This case is one of a legion of nationwide class actions that plaintiffs have filed alleging that third-party technologies (“adtech”) captured user information for targeted advertising. These tools, such as the Facebook Tracking Pixel, are widely used across millions of consumer products and websites.

In these cases, plaintiffs typically assert claims under federal or state eavesdropping statutes, consumer protection laws, or other privacy statutes. Because statutes like CIPA allow $5,000 in statutory damages per violation, plaintiffs frequently seek millions, or even billions, in potential recovery, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $5,000 per claimant, equals a huge amount of damages. While many of these suits initially targeted healthcare providers, plaintiffs have sued companies across nearly every industry, including retailers, consumer products companies, universities, and the adtech companies themselves.
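As a rough, purely illustrative sketch of that exposure math (the class size is hypothetical; $5,000 is CIPA’s per-violation figure), the aggregation works like this:

    // Back-of-the-envelope statutory exposure; figures are hypothetical.
    const classMembers = 200_000;   // hypothetical number of website visitors
    const perViolation = 5_000;     // CIPA statutory damages per violation (USD)
    const exposure = classMembers * perViolation;
    console.log(`$${exposure.toLocaleString("en-US")}`); // "$1,000,000,000"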

Several of these cases have resulted in multimillion-dollar settlements; others have been dismissed at the pleading stage (as we blogged about here) or at the summary judgment stage (as we blogged about here and here). Still, most remain undecided, and with some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.

In Cole, the plaintiffs alleged that the defendant diagnostic testing company used the Facebook Tracking Pixel on both its general website and its password-protected patient portal.  Id. at *1-2.  According to the plaintiffs, when a user accessed the general website, the Pixel intercepted and transmitted to Facebook “the URL of the page requested, along with the title of the page, keywords associated with the page, and a description of the page.” Id. at *2-3. Likewise, when a user accessed the password-protected website, the Pixel allegedly transmitted the URL “showing, at a minimum, that a patient has received and is accessing test results.” Id. at *3.

Plaintiffs asserted that these transmissions constituted (1) a CIPA violation because the company supposedly aided Facebook in “intercepting” plaintiffs’ internet communications, and (2) a CMIA violation because the company allegedly disclosed URLs associated with webpages plaintiffs accessed to view test results along with plaintiffs’ identifying information linked to users’ Facebook accounts. Id. at *3.

The company moved to dismiss, and, in separate orders, the district court dismissed both claims. See 2024 U.S. Dist. LEXIS 116350; 2025 U.S. Dist. LEXIS 7205.

As to the CIPA claim, the district court found that CIPA “is aimed only at ‘eavesdropping, or the secret monitoring of conversations by third parties,’” and that Facebook was not a third party because it received information directly from plaintiffs’ browsers about webpages they visited. 2025 U.S. Dist. LEXIS 7205, at *7-8 (quoting In Re Google Inc. Cookie Placement Consumer Privacy Litig., 806 F.3d 125, 140-41 (3d Cir. 2015)).  As to the CMIA claim, the district court found that plaintiffs alleged only that the company disclosed that a patient accessed test results but not what kind of medical test was done or what the results were. 2024 U.S. Dist. LEXIS 116350, at *15. Accordingly, the district court held that plaintiffs failed to allege the disclosure of “substantive” medical information as required under the CMIA. Id.

Plaintiffs appealed both rulings.

The Court’s Decision

The Third Circuit affirmed. Id. at *1.

On the CIPA claim, the Third Circuit explained that “[a]s a recipient of a direct communication from Plaintiffs’ browsers, Facebook was a participant in Plaintiffs’ transmissions such that [the company] did not aid or assist Facebook in eavesdropping on or intercepting such communications, even if done without the users’ knowledge.” 2025 U.S. App. LEXIS 29698, at *6.  With no eavesdropping, “Plaintiffs’ CIPA claim was properly dismissed.” Id. at *7.

On the CMIA claim, the Third Circuit explained that “at most, Plaintiffs alleged that [the company] disclosed Plaintiffs had been its patients, which is not medical information protected by CMIA.” Id. at *8. Thus, the Third Circuit held that the district court properly dismissed the CMIA claim. Id. at *9.

Implications For Companies

Cole offers strong precedent for any company defending adtech class action claims (1) brought under CIPA’s eavesdropping provision where the third-party adtech company directly receives the information from users’ browsers and (2) brought under the CMIA where the alleged disclosure merely shows that a person was a patient, without revealing any substantive information about the person’s medical condition or test results.

The latter point continues to appear across adtech class actions.  Just as the plaintiffs in Cole failed to plausibly allege the disclosure of substantive medical information, plaintiffs in similar cases have seen their claims dismissed where they alleged disclosure of protected health information (“PHI”) without actually identifying what PHI was supposedly shared (as we blogged about here).  These decisions reinforce that adtech plaintiffs must identify the specific medical information allegedly disclosed to plausibly plead claims under the CMIA or for invasion of privacy.

California Federal Court Dismisses Adtech Class Action For Failure To Specify Highly Offensive Invasion Of Privacy

By Gerald L. Maatman, Jr., Justin R. Donoho, Tyler Zmick, and Hayley Ryan

Duane Morris Takeaways:  On October 30, 2025, in DellaSalla, et al. v. Samba TV, Inc., 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025), Judge Jacqueline Scott Corley of the U.S. District Court for the Northern District of California dismissed a complaint brought by TV viewers against a TV technology company alleging that, through the advertising technology the company provided in the plaintiffs’ smart TVs, it committed the common law tort of invasion of privacy and violated the Video Privacy Protection Act (“VPPA”), the California Invasion of Privacy Act (“CIPA”), and California’s Comprehensive Computer Data Access and Fraud Act (“CDAFA”).  The ruling is significant because it shows that, in the hundreds of adtech class actions across the nation alleging that adtech violates privacy laws, plaintiffs do not plausibly state a common law claim for invasion of privacy unless they specify in the complaint the information allegedly disclosed and explain how such a disclosure was highly offensive.  The case is also significant in that it shows that the VPPA does not apply to video analytics companies, and that California privacy statutes do not apply extraterritorially to plaintiffs located outside California.

Background

This case is one of a legion of class actions that plaintiffs have filed nationwide alleging that third-party technology captured plaintiffs’ information and used it to facilitate targeted advertising. 

This software, often called advertising technologies or “adtech,” is a common feature of millions of consumer products and websites in operation today.  In adtech class actions, the key issue is often a claim brought under a federal or state wiretap act, a consumer fraud act, or the VPPA, because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of consumers or website visitors, times $2,500 per claimant in statutory damages under the VPPA, for example, equals a huge amount of damages.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they have filed suits against companies that span nearly every industry, including retailers, consumer products companies, universities, and the adtech companies themselves.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.

In DellaSalla, the plaintiffs brought suit against a TV technology company that embedded a chip with analytics software in plaintiffs’ smart TVs.  Id. at *1, 5.  According to the plaintiffs, the company intercepted the plaintiffs’ “private video-viewing data in real time, including what [t]he[y] watched on cable television and streaming services,” and tied this information to each plaintiff’s unique anonymized identifier in order to “facilitate targeted advertising,” all allegedly without the plaintiffs’ consent.  Id. at *1.  Based on these allegations, the plaintiffs claimed that the TV technology company violated the CIPA, CDAFA, and VPPA, and committed the common-law tort of invasion of privacy. 

The company moved to dismiss, arguing that the CIPA and CDAFA did not apply because the plaintiffs were located outside California, that the VPPA did not apply because the TV technology company was not a “video tape service provider,” and that the plaintiffs failed to plausibly allege a highly offensive violation of a privacy interest.

The Court’s Decision

The Court agreed with the TV technology company and dismissed the complaint in its entirety, with leave to amend any existing claims but not to add any additional claims without further leave.

On the CIPA and CDAFA claims, the Court found that the plaintiffs did not allege that any unlawful conduct occurred in California.  Instead, the plaintiffs alleged that the challenged conduct occurred in their home states of North Carolina and Oklahoma.  Id. at *1, 3-4.  For these reasons, the Court dismissed the CIPA and CDAFA claims, finding that these statutes do not apply extraterritorially.  Id.

On the VPPA claim, the Court addressed the VPPA’s definition of  “video tape service provider,” which is “any person, engaged in the business … of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.”  Id. at *5.  The plaintiffs argued that the TV technology company was a video tape service provider “because its technology is incorporated in Smart TVs, which deliver prerecorded videos.  [The defendant] advertises its technology precisely as providing a ‘better viewing experience’ ‘immersive on-screen experiences’ and a ‘more tailored ad experience’ through its technology.”  Id.  The Court rejected this argument. It held that “[t]his allegation does not plausibly support an inference, [the defendant]—an analytics software provider—facilitated the exchange of a video product. Rather, the allegations support an inference [the defendant] collected information about Plaintiffs’ use of a video product, but not that it provided the product itself.”  Id. (emphasis added).

On the common law claim for invasion of privacy, the TV technology company argued that this claim failed because the plaintiffs “have no expectation of privacy in the information it collects and Plaintiffs have not alleged a highly offensive intrusion.”  In examining this argument, the Court noted that the plaintiffs had provided only “vague references” to the information supposedly intercepted.  Id. at *4.  That information consisted of video-viewing data generally (none specified) tied to an anonymized identifier.  Id. at *1, 5.  Thus, the Court agreed with the defendant’s argument and found that plaintiffs identified “no embarrassing, invasive, or otherwise private information collected” and no explanation of how the tracking of video viewing history with an anonymized ID caused plaintiffs “to experience any kind of harm that is remotely similar to the ‘highly offensive’ inferences or disclosures that were actionable at common law.”  Id. at *5.  In sum, the Court concluded that “Plaintiffs have not plausibly alleged a highly offensive violation of a privacy interest.”

Implications For Companies

DellaSalla provides powerful precedent for any company opposing adtech class action claims (1) brought under statutes enacted in states other than the plaintiffs’ place of residence; (2) brought under the federal VPPA where the company allegedly transmitted video usage information, as opposed to any videos themselves; and (3) alleging common-law invasion of privacy, where the plaintiffs have not specified the information disclosed and why such a disclosure is highly offensive.

The last point is a recurring theme in adtech class actions.  Just as the plaintiffs suing the TV technology company here did not plausibly state a common-law claim for invasion of privacy without identifying the videos watched and any highly offensive harm in associating those videos with an anonymized ID, another plaintiff failed to plausibly state a claim for invasion of privacy premised on adtech’s disclosure of protected health information (“PHI”) without specifying the PHI allegedly disclosed (as we blogged about here).  These cases show that for adtech plaintiffs to plausibly plead claims for invasion of privacy, they at least need to identify what allegedly private information was disclosed and explain how the alleged disclosure was highly offensive.

New York Federal Court’s OpenAI Discovery Orders Provide Key Insights For Companies Navigating AI Preservation Standards

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: In a series of discovery rulings in In Re OpenAI, Inc. Copyright Infringement Litigation, No. 23 Civ. 11195 (S.D.N.Y.), Magistrate Judge Ona T. Wang issued orders that signal how courts are likely to approach AI data, privacy, and discovery obligations. Judge Wang’s orders illustrate the growing tension between AI system transparency and data privacy compliance – and how courts are trying to balance them.

For companies that develop or use AI, these rulings highlight both the risk of expansive preservation demands and the opportunity to shape proportional, privacy-conscious discovery frameworks. Below is an overview of these decisions and the takeaways for in-house counsel, privacy officers, and litigation teams.

Background

In May 2025, the U.S. District Court for the Southern District of New York issued a preservation order in a copyright action challenging the use of The New York Times’ content to train large language models. The order required OpenAI to preserve and segregate certain output log data that would otherwise be deleted. Days later, the Court denied OpenAI’s motion to reconsider or narrow that directive. By October 2025, however, the Court approved a negotiated modification that terminated OpenAI’s ongoing preservation obligations while requiring continued retention of the already-segregated data.

The Court’s Core Rulings

  1. Forward-Looking Preservation Now, Arguments Later

On May 13, 2025, the Court entered an order requiring OpenAI to preserve and segregate output log data that would otherwise be deleted, including data subject to user deletion requests or statutory erasure rights. See id., ECF No. 551. The rationale: once litigation begins, even transient data can be critical to issues like bias and representativeness. The Court stressed that it was too early to weigh proportionality, so preservation would continue until a fuller record emerged.

  2. Reconsideration Denied, Preservation Continues

A few days later, when OpenAI sought reconsideration or modification of the preservation order, the Court denied the request without prejudice. Id., ECF No. 559. The Court noted that it was premature to decide proportionality and potential sampling bias until additional information was developed.

  3. A Negotiated “Sunset” and Privacy Carve-Outs

By October 2025, the parties agreed to wind down the broad preservation obligation. On October 9, 2025, the Court approved a stipulated modification that ended OpenAI’s ongoing preservation duty as of September 26, 2025, limited retention to already-segregated logs, excluded requests originating from the European Economic Area, Switzerland, and the United Kingdom for privacy compliance, and added targeted, domain-based preservation for select accounts listed in an appendix. Id., ECF No. 922.

This evolution — from blanket to targeted, time-limited preservation — shows courts’ willingness to adapt when parties document technical feasibility, privacy conflicts, and litigation need.

Implications For Companies

  1. Evidence vs. Privacy: Courts Expect You to Reconcile Both

These rulings show that courts will not accept “privacy law conflicts” as a stand-alone excuse to delete potentially relevant data. Instead, companies must show they can segregate, anonymize, or retain data while maintaining compliance. The OpenAI orders make clear: when evidence may be lost, segregation beats destruction.

  2. Proportionality Still Matters

Even as courts push for preservation, they remain attentive to proportionality. While early preservation orders may seem sweeping, judges are open to refining them once the factual record matures. Companies that track the cost, burden, and privacy impact of compliance will be best positioned to negotiate tailored limits.

  3. Preservation Is Not Forever

The October 2025 stipulation illustrates how to exit an indefinite obligation: offer targeted cohorts, geographic exclusions, and sunset provisions supported by a concrete record. Courts will listen if you bring data, not just arguments.

A Playbook for In-House Counsel

  1. Map Your AI Data Universe

Inventory all AI-related data exhaust: prompts, outputs, embeddings, telemetry, and retention settings. Identify controllers, processors, and jurisdictions.

  2. Build “Pause” Controls

Design systems capable of segregating or pausing deletion by user, region, or product line. This technical agility is key when a preservation order issues. (A brief illustrative sketch of such a control follows this playbook.)

  3. Update Litigation Hold Templates for AI

Traditional holds miss ephemeral or system-generated data. Draft holds that instruct teams how to pause automated deletion while complying with privacy statutes.

  4. Propose Targeted Solutions

When facing broad discovery demands, offer alternatives: limit by time window, geography, or user cohort. Courts will accept reasonable, well-documented compromises.

  5. Build Toward an Off-Ramp

Preservation obligations can sunset — but only if supported by metrics. Track preserved volumes, costs, and privacy burdens to justify targeted, defensible limits.
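To make the “pause” controls in step 2 concrete, here is a minimal sketch, assuming a retention pipeline that tags each record with user, region, and product-line metadata; all names and types are hypothetical and not any vendor’s actual API:

    // Minimal sketch of a deletion "pause" control (hypothetical names/types).
    // An active litigation hold routes matching records to segregation
    // instead of deletion.
    interface RecordMeta {
      userId: string;
      region: string;       // e.g., "EEA", "US"
      productLine: string;
    }

    interface LitigationHold {
      userIds?: Set<string>;
      regions?: Set<string>;
      productLines?: Set<string>;
    }

    function mayDelete(record: RecordMeta, holds: LitigationHold[]): boolean {
      // Deletion proceeds only if no active hold matches the record.
      return !holds.some(
        (h) =>
          h.userIds?.has(record.userId) ||
          h.regions?.has(record.region) ||
          h.productLines?.has(record.productLine)
      );
    }

    function routeDeletionRequest(
      record: RecordMeta,
      holds: LitigationHold[]
    ): "delete" | "segregate-and-retain" {
      return mayDelete(record, holds) ? "delete" : "segregate-and-retain";
    }

The design point mirrors the OpenAI orders: a hold flips matching records from the deletion path to a segregation path without suspending the pipeline wholesale.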

Conclusion

The OpenAI orders reflect a new judicial mindset: preserve broadly first, negotiate smartly later. AI developers and data-driven businesses should expect similar directives in future litigation. Those that engineer for preservation flexibility, document privacy compliance, and proactively negotiate scope will avoid the steep costs of one-size-fits-all discovery — and may even help set the industry standard for balanced AI litigation governance.

California Federal Court Narrows CIPA “In-Transit” Liability for Common Website Advertising Technology and Urges Legislature to Modernize Privacy Law

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 17, 2025, in Doe v. Eating Recovery Center LLC, No. 23-CV-05561, ECF 167 (N.D. Cal. Oct. 17, 2025), Judge Vince Chhabria of the U.S. District Court for the Northern District of California granted summary judgment to Eating Recovery Center, finding no violation of the California Invasion of Privacy Act (CIPA) where the Meta Pixel collected website event data. Specifically, the Court held that Meta did not “read” the contents of those communications while they were “in transit.” In so holding, the Court applied the rule of lenity, construed CIPA narrowly, and urged the California Legislature “to step up” and modernize the statute for the digital age. Id. at 2.

This decision is significant because Judge Chhabria candidly described CIPA as “a total mess,” noting it is often “borderline impossible” to determine whether the law – enacted in 1967 to criminalize wiretapping and eavesdropping on confidential communications – applies to modern internet transmissions. Id. at 1. As the Court observed, CIPA “was a mess from the get-go, but the mess gets bigger and bigger as the world continues to change and as courts are called upon to apply CIPA’s already-obtuse language to new technologies.” Id.  This is a “must read” decision for corporate counsel dealing with privacy issues and litigation.

Background

This class action arose after plaintiff, Jane Doe, visited Eating Recovery Center’s (ERC) website to research anorexia treatment and later received targeted advertisements. Plaintiff alleged that ERC’s use of the Meta Pixel caused Meta to receive sensitive URL and event data from her interactions with ERC’s site, resulting in targeted ads related to eating disorders.

ERC had installed the standard Meta Pixel on its website, which automatically collected page URLs, time on page, referrer paths, and certain click events to help ERC build custom audiences for advertising. Id. at 3. Plaintiff alleged that ERC’s use of the Pixel allowed Meta to intercept her communications in violation of CIPA, Cal. Penal Code § 631(a). She also brought claims under the California Confidentiality of Medical Information Act (CMIA), the California Unfair Competition Law (UCL), and for common law unjust enrichment. The UCL claim was dismissed at the pleading stage.
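For context, the standard Pixel base code loads Meta’s fbevents.js script and queues events through a global fbq function; the simplified sketch below (Pixel ID hypothetical) shows the kinds of calls at issue, which transmit the page URL, referrer, and event metadata to Meta:

    // Simplified sketch of the standard Meta Pixel base code (Pixel ID hypothetical).
    // In practice, the snippet first injects Meta's fbevents.js, which defines
    // the global `fbq` event queue used below.
    declare function fbq(...args: unknown[]): void;

    fbq("init", "0000000000");   // hypothetical Pixel ID
    fbq("track", "PageView");    // fires on each page load, sending the page URL
                                 // and referrer along with event metadata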

ERC later moved for summary judgment on the remaining CIPA, CMIA, and unjust enrichment claims. In a separate order, the Court granted summary judgment on the CMIA and unjust enrichment claims, finding that plaintiff was not a “patient” under the CMIA and that there was no evidence ERC had been unjustly enriched. See id., ECF 168 at 1-2.

The Court’s Decision

With respect to the CIPA claim, the parties disputed two elements under CIPA § 631(a): (1) whether the event data obtained by Meta constituted “contents” of plaintiff’s communication with ERC, and (2) whether Meta read, attempted to read, or attempted to learn those contents while they were “in transit.” ECF 167 at 6.

The Court first held that URLs and event data can constitute the “contents” of a communication because they can reveal substantive information about a user’s activities – such as researching medical treatment. Id. at 7. The Court thus parted ways with other courts that have reached different conclusions on this issue based on additional facts or allegations not presented here (such as encryption, or the inability to reasonably identify the data among lines of code).  However, the Court concluded that Meta did not read or attempt to learn any contents while the communications were “in transit.” Instead, Meta processed the data only after it had reached its intended recipient (i.e., ERC, the website operator).

In reaching that conclusion, Judge Chhabria relied on undisputed testimony about Meta’s internal filtering processes: “Meta’s corporate representative testified that, before logging the data that it obtains from websites, Meta filters URLs to remove information that it does not wish to store (including information that Meta views as privacy protected).” Id. at 8.

This evidence supported the finding that Meta’s conduct involved post-receipt filtering rather than contemporaneous “reading” or “learning.” Id. at 9. The Court emphasized that expanding “in transit” to include post-receipt processing would improperly criminalize routine website analytics practices. Because CIPA is both a criminal statute and a source of punitive civil penalties, the Court applied the rule of lenity to adopt a narrow interpretation. Id. at 11-12. The Court further cautioned that an overly broad reading would render CIPA’s related provision (§ 632, prohibiting eavesdropping and recording) largely redundant. Id. at 10.

Finding that Meta did not read, attempt to read, or attempt to learn the contents of Doe’s communications while they were in transit, the Court granted summary judgment to ERC on the CIPA claim. Id. at 12.

The opinion concluded by reiterating that California’s decades-old wiretap law is “virtually impossible to apply to the online world,” urging the Legislature to “go back to the drawing board on CIPA,” and suggesting that it “would probably be best to erase the board entirely and start writing something new.” Id.

Implications For Companies

The Doe decision narrows one significant avenue for CIPA liability, particularly for routine use of website analytics and advertising pixels. The Northern District of California has now drawn a distinction between data “read” while in transit and data processed after receipt, significantly reducing immediate CIPA exposure for standard web advertising tools.

At the same time, the court’s reasoning underscores that pixel-captured data may be considered by some courts as “contents” of a communication under CIPA, although there is a split of authority on this issue. Companies could therefore face potential exposure under other California privacy statutes, including the CMIA, the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA), depending on the data involved and how it is used.

Organizations should continue to inventory the data they share through advertising technologies, minimize sensitive information in URLs, and ensure clear and accurate privacy disclosures. Because the court expressly invited legislative reform, companies should also monitor ongoing case law and potential statutory amendments.

Ultimately, Doe v. Eating Recovery Center reflects a pragmatic narrowing of CIPA’s “in transit” requirement while reaffirming that CIPA was not intended to cover common website advertising technologies or, in any event, should not be interpreted to do so given the harsh statutory penalties involved and the rule of lenity, as the Supreme Judicial Court of Massachusetts concluded regarding Massachusetts’ wiretap act (as we previously blogged about here).  While this case is a big win for website operators, companies relying on third-party analytics should treat this decision as guidance, not immunity, and continue adopting privacy-by-design principles in their data collection and vendor management practices.

Illinois Federal Court Finds “Self-Inflicted Injury” Insufficient To Confer Article III Standing In Publicity Class Action Lawsuit

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 2, 2025, in Azuz v. Accucom Corp. d/b/a InfoTracer, No. 21-CV-01182, 2025 U.S. Dist. LEXIS 195474 (N.D. Ill. Oct. 2, 2025), Judge LaShonda A. Hunt of the U.S. District Court for the Northern District of Illinois dismissed a class action complaint alleging violations of the Illinois Right of Publicity Act (IRPA). The plaintiff claimed that InfoTracer unlawfully used individuals’ names and likenesses to advertise and promote its products without consent. The Court held that the Plaintiff lacked Article III standing because she failed to plausibly allege a concrete injury – her only alleged harm was “self-inflicted,” as no one other than her own counsel ever searched her name on the site.

The decision illustrates that plaintiffs bringing right of publicity claims against website operators must show that a third party actually accessed their information for a commercial purpose. Mere availability of an individual’s information on a website, without evidence of third-party viewing, does not establish a concrete injury under Article III.

Background

Plaintiff Marilyn Azuz filed a putative class action complaint against Accucom Corp. d/b/a InfoTracer, which operates infotracer.com, a website selling personal background reports. She alleged that Accucom used her name and likeness to advertise and promote its products without written consent, in violation of the IRPA. Id. at *2-4. Plaintiff sought damages and injunctive relief barring Accucom from continuing the alleged conduct. Id. at *4.

After three years of litigation and discovery, Accucom moved to dismiss for lack of subject matter jurisdiction, raising a factual challenge to Article III standing. Accucom submitted evidence showing that the only search of Plaintiff’s name on InfoTracer occurred in February 2021, when her own counsel accessed the site after she responded to her counsel’s Facebook solicitation about potential claims. Accucom argued that such a “self-inflicted” search could not establish a concrete injury and that Plaintiff’s claim for injunctive relief was moot because she had since moved to Minnesota and her data had been removed from the site.

Plaintiff countered that her identity being “held out” to be searched constituted a sufficient injury, and that her request for injunctive relief was not moot because Accucom could resume the alleged conduct.

The Court’s Decision

The Court sided with Accucom, holding that the Plaintiff failed to establish a concrete injury and therefore lacked standing to pursue her individual claims. Id. at *15.

Relying on the U.S. Supreme Court’s decision in TransUnion LLC v. Ramirez, 594 U.S. 413 (2021), Judge Hunt explained that an intangible statutory violation, without evidence of concrete harm, is insufficient for Article III standing.  Just as inaccurate information in a credit file causes no concrete injury unless disclosed to a third party, the Court concluded, “a person’s identity is not appropriated under the IRPA unless it is used for a commercial purpose.” Id. at *14.

The Court rejected Plaintiff’s reliance on Lukis v. Whitepages Inc., 549 F. Supp. 3d 798 (N.D. Ill. 2021), noting that Lukis involved only a facial attack on standing at the pleading stage, not a factual attack supported by evidence, as here. Id. at *9-10.

Noting that it had not found any post-TransUnion decisions analyzing the IRPA under a factual challenge to standing, Judge Hunt found Fry v. Ancestry.com Operations Inc., 2023 U.S. Dist. LEXIS 50330 (N.D. Ind. Mar. 24, 2023) to be instructive. Id. at *11. In Fry, the court cautioned that a plaintiff asserting a right of publicity claim must ultimately produce evidence showing that his likeness was viewed by someone other than his attorney or their agents. That same “forewarning,” Judge Hunt concluded, applied to Plaintiff, who presented no such evidence. Id. at *12-13.

The Court also dismissed Plaintiff’s request for injunctive relief, holding that any potential future harm was speculative and not sufficiently imminent. Because Plaintiff had relocated to Minnesota and the IRPA does not apply extraterritorially, the statute could not extend to her circumstances. Id. at *16.

Finally, the Court declined to allow the substitution of new named plaintiffs so that the case could continue, reasoning that because the original plaintiff lacked standing from the outset, the Court never had jurisdiction to allow substitution. Id. at *17.

Implications For Companies

Azuz underscores the importance of scrutinizing Article III standing at every stage of litigation, particularly in statutory publicity and privacy cases. Where plaintiffs cannot show that a third party viewed or interacted with their data, courts are likely to find no concrete injury, and therefore no federal jurisdiction.

Website operators facing IRPA or similar publicity-based class actions should consider asserting factual standing challenges supported by evidence demonstrating the absence of third-party access. Such jurisdictional defenses can be decisive and may be raised at any time in the litigation.

Hospital Defeats Wiretap Adtech Class Action After Texas Federal Court Finds No Knowing Disclosure Of Protected Health Information

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: On September 22, 2025, in Sweat v. Houston Methodist Hospital, No. 24-CV-00775, 2025 U.S. Dist. LEXIS 185310 (S.D. Tex. Sept. 22, 2025), Judge Lee H. Rosenthal of the U.S. District Court for the Southern District of Texas granted a motion for summary judgment in favor of a hospital accused of violating the federal Wiretap Act through its use of website advertising technology. This decision is significant. In the wave of adtech class actions seeking millions – sometimes billions – in statutory damages under the Wiretap Act and similar statutes, the Court held that the Act’s steep penalties (up to $10,000 per violation) were not triggered because the hospital did not knowingly transmit protected health information.

Background

This case is part of a rapidly growing line of class actions alleging that website advertising tools – such as the Meta Pixel, Google Analytics, and similar website advertising technology, or “adtech” – secretly capture users’ web-browsing activity and share it with third-party advertising platforms.

Adtech is ubiquitous, embedded on millions of websites. Plaintiffs’ lawyers frequently invoke the federal Wiretap Act, the Video Privacy Protection Act (VPPA), state invasion-of-privacy statutes like the California Invasion of Privacy Act (CIPA), and even the Illinois Genetic Information Privacy Act (GIPA). Their theory is straightforward: multiply hundreds of thousands of website visitors by $10,000 per alleged Wiretap Act violation and the potential damages skyrocket. While some of these class actions have resulted in multi-million-dollar settlements, others have been dismissed (as we blogged about here), and the vast majority remain pending. With some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.

In Sweat, the plaintiffs sued a hospital, seeking to represent a class of patients whose personal health information was allegedly disclosed by the Meta Pixel installed on the hospital’s website. The district court granted the hospital’s motion to dismiss the state law invasion of privacy claim but allowed the Wiretap Act claim to proceed to discovery. The hospital then moved for summary judgment, arguing that the Wiretap Act’s crime-tort exception did not apply because the hospital lacked knowledge that it was disclosing protected health information.

Under the Wiretap Act, a “party to the communication” cannot be sued unless it intercepted the communication “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This provision is commonly called the “crime-tort exception.” The plaintiffs pointed to alleged violations of the Health Insurance Portability and Accountability Act (HIPAA) as the predicate crime to trigger this exception.

The Court’s Decision

The Court agreed with the hospital and granted summary judgment, holding that the record contained no evidence that the hospital acted with the “purpose of committing any criminal or tortious act” that would trigger the crime-tort exception. 2025 U.S. Dist. LEXIS 185310, at *13.

As the Court explained, courts have developed two different approaches to determining “purpose” under the crime-tort exception. Some courts use the “independent act” approach, under which the unlawful act must be independent of the interception itself. Other courts have used the “primary purpose” approach, under which the defendant’s primary motivation must be to commit a crime or tort.

Applying the “primary purpose” approach, the Court found “no evidence that [the hospital] acted with the purpose of violating HIPAA…the evidence shows that it did not know it was doing so.” Id. at *13. In so holding, the Court cited the fact that, although the Pixel was installed on “arguably sensitive portions” of the hospital’s website, the hospital received only aggregated, anonymized data, and there was no proof it knew any protected health information was being disclosed. Id. at *13-14. The Court rejected the plaintiffs’ argument that anonymized aggregate data necessarily originates from identifiable data, emphasizing that Meta’s algorithm could anonymize data “at the input level,” preventing the hospital from receiving identifiable data in the first place. Id. at *16.

Implications For Companies

The Court’s holding in Sweat is a significant win for healthcare providers and other defendants facing adtech class actions. This ruling reinforces two key principles. First, knowledge is critical. Like the Wiretap Act’s HIPAA-based crime-tort exception, similar statutes such as the VPPA require a knowing disclosure of identifiable information. If a defendant lacks knowledge that data is tied to specific individuals, liability should not attach. Second, anonymization matters. Where transmissions are encrypted, anonymized, or otherwise inaccessible at the point of input, there may be no “disclosure” at all.

For example, the VPPA requires disclosure of a person’s specific video-viewing activity, and GIPA requires disclosure of an identified individual’s genetic information. When adtech merely sends anonymized or encrypted data to third-party algorithms—data that cannot be traced back to a specific person—there is no knowing disclosure.
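One input-level technique of the kind the Court credited, hashing an identifier in the browser before any transmission, can be sketched as follows (a simplified illustration under stated assumptions, not a description of Meta’s actual pipeline):

    // Simplified illustration of input-level anonymization (not any vendor's
    // actual pipeline): the identifier is hashed client-side, so the recipient
    // never receives the raw value.
    async function hashIdentifier(raw: string): Promise<string> {
      const bytes = new TextEncoder().encode(raw.trim().toLowerCase());
      const digest = await crypto.subtle.digest("SHA-256", bytes); // Web Crypto API
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // The adtech payload would then carry only the irreversible hash.
    hashIdentifier("user@example.com").then((hash) => console.log(hash));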

Sweat provides strong authority for defendants to argue that anonymized adtech transmissions cannot satisfy the statutory knowledge requirements of the Wiretap Act’s HIPAA-based crime-tort exception or similarly worded privacy statutes.

New York Federal Court Dismisses Adtech Class Action Because No Ordinary Person Could Identify Web User

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On September 3, 2025, in Golden v. NBCUniversal Media, LLC, No. 22-CV-9858, 2025 WL 2530689 (S.D.N.Y. Sept. 3, 2025), Judge Paul A. Engelmayer of the U.S. District Court for the Southern District of New York granted a motion to dismiss with prejudice for a media company on a claim that the company’s use of website advertising technology on its website violated the Video Privacy Protection Act (“VPPA”).  The ruling is significant because, amid the explosion of adtech class actions across the nation seeking millions or billions of dollars in statutory damages under the VPPA and myriad other statutes on similar theories that the website owner disclosed website activities to Facebook, Google, and other advertising agencies, it holds that the statute and its harsh penalties are not triggered where no ordinary person could access and decipher the information transmitted.

Background

This case is one of a multiplying legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing activity and sent it to Meta, Google, and other online advertising agencies.

This software, often called website advertising technology or “adtech,” is a common feature on corporate, governmental, and other websites in operation today.  In adtech class actions, the key issue is often a claim brought under the VPPA, a federal or state wiretap act, a consumer fraud act, and even the Illinois Genetic Information Privacy Act (GIPA), because plaintiffs often seek millions (and sometimes even billions) of dollars, even from midsize companies, on the theory that hundreds of thousands of website visitors, times $2,500 per claimant in statutory damages under the VPPA, for example, equals a huge amount of damages.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they also have filed suits against companies that span nearly every industry, including retailers, consumer products companies, and universities.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.  With some district courts being more permissive than others in allowing adtech class actions to proceed beyond the motion to dismiss stage (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an alarming rate.

In Golden, the plaintiff brought suit against a media company.  According to the plaintiff, she signed up for an online newsletter offered by the media company and, thereafter, visited the media company’s website, where she watched videos.  Id. at *2-4.  The plaintiff further alleged that, after she watched those videos, her video-watching history was sent to Meta without her permission via the media company’s undisclosed use of the Meta Pixel on its website.  Id.  Like plaintiffs in most adtech class action complaints, this plaintiff: (1) alleged that before the company sent the web-browsing data to the online advertising agency (e.g., Meta), the company encrypted the data via the secure “https” protocol (id., ECF No. 56 ¶ 45); and (2) did not allege that any human had her encrypted web-browsing data, or that the advertising agency or any other entity or person stored her web-browsing data or could retrieve it from the advertising agency’s algorithms in a decrypted (readable) format.  Based on these allegations, the plaintiff asserted a violation of the VPPA.

The media company moved to dismiss under Rule 12(b)(6), arguing that the complaint did not adequately allege that the media company “disclosed” the plaintiff’s “personally identifiable information” (“PII”), defined under the VPPA as “information which identifies a person as having requested or obtained specific video materials or services….”  Id., 2025 WL 2530689, at *5-6.

The Court’s Decision

The Court agreed with the media company and held that the plaintiff failed plausibly to plead any unauthorized “disclosure.” 

As the Court explained, “PII, under the VPPA, has three distinct elements: (1) the consumer’s identity, (2) the video material’s identity, and (3) the connection between them.”  Id. at *6.  Moreover, PII “encompasses information that would allow an ordinary person to identify a consumer’s video-watching habits, but not information that only a sophisticated technology company could use to do so.”  Id. (emphasis in original).  Therefore, “to survive a motion to dismiss, a complaint must plausibly allege that the defendant’s disclosure of information would, with little or no extra effort, permit an ordinary recipient to identify the plaintiff’s video-watching habits.”  Id.  For these reasons, explained the Court, the Second Circuit has “effectively shut the door for Pixel-based VPPA claims.”  Id. at *7 (citing Hughes v. National Football League, 2025 WL 1720295 (2d Cir. June 20, 2025)).

Applying these standards, the Court dismissed the plaintiff’s VPPA claim with prejudice, holding that, “[i]n short, because the alleged disclosure could not be appreciated — decoded to reveal the actual identity of the user, and his or her video selections — by an ordinary person but only by a technology company such as Facebook, it did not amount to PII.”  Id. at *6-7.  In so holding, the Court cited an “emergent line of authority” shutting the door on VPPA claims not only in the Second Circuit but also in other U.S. Courts of Appeals.  See In re Nickelodeon Consumer Priv. Litig., 827 F.3d 262, 283 (3d Cir. 2016) (affirming dismissal of VPPA case involving the use of Google Analytics, stating, “To an average person, an IP address or a digital code in a cookie file would likely be of little help in trying to identify an actual person”); Eichenberger v. ESPN, Inc., 876 F.3d 979, 986 (9th Cir. 2017) (affirming dismissal of VPPA case because “an ordinary person could not use the information that Defendant allegedly disclosed [a device serial number] to identify an individual”).

Implications For Companies

The Court’s holding in Golden is a win for adtech class action defendants and should be instructive for courts around the country addressing adtech class actions brought not only under the VPPA, but also under other statutes prohibiting “disclosures” and the like.  Such statutes should be interpreted similarly to require proof that an ordinary person could access and decipher the web-browsing data, identify the person, and link the person to the data.

Consider a few examples.  A GIPA claim requires proof of a disclosure or a breach of confidentiality and privilege.  An eavesdropping claim under the California Invasion of Privacy Act (CIPA) § 632 requires proof of eavesdropping.  A trap and trace claim under CIPA § 638.51 requires proof that the data captured is reasonably likely to identify the source of the data.  A claim under the Electronic Communications Privacy Act (ECPA) requires proof of an interception.

When adtech sends encrypted, inaccessible, anonymized transmissions to the advertising agency’s algorithms, has there been any disclosure or breach of confidentiality and privilege (GIPA), eavesdropping (CIPA § 632), data capture reasonably likely to identify the source (CIPA § 638.51), or interception (ECPA)?  Just as adtech transmissions are insufficient to amount to a disclosure under the VPPA, Golden suggests that they should not trigger these similarly worded statutes either, because no ordinary person could access and decipher the data transmitted.
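
By way of illustration only, consider what an outside recipient of such a transmission actually sees.  The short sketch below is hypothetical (the visitor, identifier, and hashing step are our own assumptions, though one-way SHA-256 hashing of identifiers is a common adtech practice, and “https”/TLS encryption in transit, as alleged in Golden, has a similar practical effect for an outside observer):

    # Hypothetical illustration, not any party's actual code: what an
    # "ordinary recipient" of a hashed adtech identifier sees.
    import hashlib

    email = "jane.doe@example.com"  # hypothetical website visitor
    digest = hashlib.sha256(email.encode("utf-8")).hexdigest()
    print(digest)  # a 64-character hex string, computationally infeasible to reverse

    # Without the advertising platform's own matching tables and algorithms,
    # no ordinary person could recover the visitor's identity from this value,
    # let alone link her to specific video content.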

Illinois Federal Courts Allow Adtech And Edtech ECPA Claims To Proceed, Furthering Split Of Authority

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways:  On August 20, 2025, in Hannant v. Sarah D. Culbertson Memorial Hospital, 2025 WL 2413894 (C.D. Ill. Aug. 20, 2025), Judge Sara Darrow of the U.S. District Court for the Central District of Illinois granted a motion to dismiss while allowing a website user to re-plead her claim that the hospital’s use of website advertising technology (“adtech”) violated the Electronic Communications Privacy Act (“ECPA”).  The same day, in Q.J. v. Powerschool Holdings, LLC, 2025 WL 2410472 (N.D. Ill. Aug. 20, 2025), Judge Jorge Alonso of the U.S. District Court for the Northern District of Illinois denied the Chicago school board and its educational technology (“edtech”) provider’s motion to dismiss a claim that their use of a third-party data analytics tool violated the ECPA.  These rulings are significant because they show that, in the hundreds of adtech, edtech, and other internet-based technology class actions across the nation seeking millions (or billions) of dollars in statutory damages under the ECPA, Illinois Federal courts have distinguished themselves from courts in other jurisdictions, which have refused to interpret the ECPA in such a plaintiff-friendly manner.

Background

These cases are two of a legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing data and sent it to Meta, Google, and other online advertising agencies and/or data analytics companies.  In these adtech, edtech, and similar class actions, the key claim is often brought under the ECPA on the theory that hundreds of thousands of website visitors times $10,000 per claimant in statutory damages equals a huge amount of damages.  Plaintiffs have filed the bulk of these lawsuits to date against healthcare providers, but they have also sued defendants spanning nearly every industry, including education, retail, and consumer products.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.
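
To put that damages theory in concrete terms: the ECPA awards the greater of $100 per day of violation or $10,000 per claimant in statutory damages, 18 U.S.C. § 2520(c)(2)(B).  The sketch below is purely illustrative; the class size and violation period are hypothetical assumptions:

    # Illustrative only: ECPA statutory damages per claimant and in the aggregate.
    def ecpa_statutory_damages(days_of_violation: int) -> int:
        # Greater of $100/day or $10,000, per 18 U.S.C. § 2520(c)(2)(B)
        return max(100 * days_of_violation, 10_000)

    hypothetical_class_size = 300_000          # assumption only
    per_claimant = ecpa_statutory_damages(30)  # hypothetical 30-day period
    print(f"${hypothetical_class_size * per_claimant:,}")  # prints $3,000,000,000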

In Hannant, the plaintiff brought suit against a hospital.  According to the plaintiff, the hospital installed the Meta Pixel on its website, thereby transmitting to Meta, allegedly without the plaintiff’s consent, data about her visit to the hospital’s website. 

In Q.J., the plaintiff brought suit against the Chicago school board and its edtech provider.  According to the plaintiff, the school board and edtech provider installed a third-party data analytics tool called Heap Autocapture on the edtech provider’s online platform, thereby transmitting to Heap, allegedly without consent, information about the students’ visits to the online platform.

In both lawsuits, the plaintiffs claimed that these alleged events amounted to an “interception” by the defendant in violation of the ECPA.  Neither defendant contested whether the plaintiffs had plausibly alleged an “interception,” even though the alleged events were more like the catching and forwarding of a different ball than an interception: in Hannant, the complaint itself alleged that the communication Meta received was not the same transmission but a “duplicate[]” that was “forward[ed]” (see No. 24-CV-4164, ECF No. 14 ¶¶ 49, 363), and in Q.J. the allegations of a purported “interception” were wholly conclusory.  Instead, both defendants moved to dismiss the ECPA claim on the ground that, to the extent there was any interception, no liability exists under the ECPA’s exception for a party that does not act “for the purpose of committing any criminal or tortious act.”  18 U.S.C. § 2511(2)(d).
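
For readers unfamiliar with how pixel-style tracking typically operates, the following highly simplified sketch (all names hypothetical, not any party’s actual code) illustrates the “duplicate and forward” structure the Hannant complaint itself describes: the visitor’s original request to the website is never diverted; rather, a second, separate transmission describing the same event is sent to the advertising platform afterwards.

    # Highly simplified, hypothetical sketch of pixel-style tracking.
    def website_respond(url: str) -> str:
        # 1. The website answers the visitor's original request as usual.
        return f"<html>page for {url}, with embedded pixel script</html>"

    def send_to_ad_platform(event: dict) -> None:
        # 2. A separate, duplicate transmission describes the same visit.
        print(f"second transmission to ad platform: {event}")

    def browser_visits(url: str) -> None:
        page = website_respond(url)  # the original communication, undisturbed
        if "pixel" in page:          # the pixel script then fires a new request
            send_to_ad_platform({"event": "PageView", "url": url})

    browser_visits("https://hospital.example/services")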

The Courts’ Decisions

In Hannant, the Court dismissed the ECPA claim without prejudice and granted the plaintiff leave to re-plead in a fashion that might allow an amended complaint to withstand a renewed motion to dismiss.  Specifically, the Court found that an amendment might plausibly allege a criminal or tortious purpose by adding sufficient detail about the plaintiff’s website interactions to show that there had been a violation of the Health Insurance Portability and Accountability Act (“HIPAA”), which provides for criminal and civil penalties against a person “who knowingly … discloses individually identifiable health information [(‘IIHI’)] to another person.”  2025 WL 2413894, at *3 (quoting 42 U.S.C. § 1320d-6).  As the Court explained, under adtech class-action precedent in the U.S. District Court for the Northern District of Illinois, adding additional detail regarding the alleged transmission of IIHI could be enough to allege a criminal or tortious purpose.  Id. at *3-5.

In Q.J., the Court denied the school board and edtech provider’s motion to dismiss, citing the same plaintiff-friendly precedent in the Northern District of Illinois cited by the opinion in Hannant, and explaining that, while the allegedly disclosed data in this educational context did not implicate HIPAA, the plaintiff had plausibly alleged that the transmissions at issue violated the Illinois School Student Records Act (“ISSRA”), 105 ILCS 10/6, and the Family Educational Rights and Privacy Act (“FERPA”), 20 U.S.C. § 1232g.  2025 WL 2410472, at *6.

Implications For Companies

In Illinois Federal courts, pixels and cookies are no longer just marketing and educational tools; they are legal risk vectors.  By contrast, other U.S. District Courts ruling on Rule 12(b)(6) motions have found no plausibly alleged interception where an internet-based communication is forwarded as opposed to being intercepted mid-flight, and no plausibly alleged criminal or tortious purpose where the purpose was not to violate any statute but rather to engage in advertising or data analytics.  (See, e.g., our prior blog entry discussing one of these several cases, here.)

Website owners facing lawsuits in Illinois District Courts would do well to press the arguments that have found success in other jurisdictions in order to preserve them for appeal to the Seventh Circuit, which has yet to rule on these issues.  In addition, other defenses remain, including demonstrating that plaintiffs cannot meet their burden of proving any actual disclosure to a third party where the transmissions of website-entered information to adtech vendors and data analytics providers such as Meta or Google are encrypted, ephemeral, anonymized, aggregated, and otherwise unviewable and irretrievable by any human.

Corporate counsel seeking to deter ECPA litigation should keep in mind the following best practices (discussed in more detail in our prior blog post, here): (1) add or update arbitration clauses to deter class actions and mitigate the risks of mass arbitration; (2) update website terms of use, data privacy policies, and vendor agreements; and (3) audit and adjust uses of website advertising technologies.
