Illinois Federal Court Finds “Self-Inflicted Injury” Insufficient To Confer Article III Standing In Publicity Class Action Lawsuit

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways: On October 2, 2025, in Azuz v. Accucom Corp. d/b/a InfoTracer, No. 21-CV-01182, 2025 U.S. Dist. LEXIS 195474 (N.D. Ill. Oct. 2, 2025), Judge LaShonda A. Hunt of the U.S. District Court for the Northern District of Illinois dismissed a class action complaint alleging violations of the Illinois Right of Publicity Act (IRPA). The plaintiff claimed that InfoTracer unlawfully used individuals’ names and likenesses to advertise and promote its products without consent. The Court held that the Plaintiff lacked Article III standing because she failed to plausibly allege a concrete injury – her only alleged harm was “self-inflicted,” as no one other than her own counsel ever searched her name on the site.

The decision illustrates that plaintiffs bringing right of publicity claims against website operators must show that a third party actually accessed their information for a commercial purpose. Mere availability of an individual’s information on a website, without evidence of third-party viewing, does not establish a concrete injury under Article III.

Background

Plaintiff Marilyn Azuz filed a putative class action complaint against Accucom Corp. d/b/a InfoTracer, which operates infotracer.com, a website selling personal background reports. She alleged that Accucom used her name and likeness to advertise and promote its products without written consent, in violation of the IRPA. Id. at *2-4. Plaintiff sought damages and injunctive relief barring Accucom from continuing the alleged conduct. Id. at *4.

After three years of litigation and discovery, Accucom moved to dismiss for lack of subject matter jurisdiction, raising a factual challenge to Article III standing. Accucom submitted evidence showing that the only search of Plaintiff’s name on InfoTracer occurred in February 2021, when her own counsel accessed the site after she responded to a Facebook solicitation by her counsel about potential claims. Accucom argued that such a “self-inflicted” search could not establish a concrete injury and that Plaintiff’s claim for injunctive relief was moot because she had since moved to Minnesota and her data had been removed from the site.

Plaintiff countered that her identity being “held out” to be searched constituted a sufficient injury, and that her request for injunctive relief was not moot because Accucom could resume the alleged conduct.

The Court’s Decision

The Court sided with Accucom, holding that the Plaintiff failed to establish a concrete injury and therefore lacked standing to pursue her individual claims. Id. at *15.

Relying on the U.S. Supreme Court’s decision in TransUnion LLC v. Ramirez, 594 U.S. 413 (2021), Judge Hunt explained that an intangible statutory violation, without evidence of concrete harm, is insufficient for Article III standing.  Just as inaccurate information in a credit file causes no concrete injury unless disclosed to a third party, the Court concluded, “a person’s identity is not appropriated under the IRPA unless it is used for a commercial purpose.” Id. at *14.

The Court rejected Plaintiff’s reliance on Lukis v. Whitepages Inc., 549 F. Supp. 3d 798 (N.D. Ill. 2021), noting that Lukis involved only a facial attack to standing at the pleading stage, not a factual attack supported by evidence, like here. Id. at *9-10.

Noting that it had not found any post-TransUnion decisions analyzing the IRPA under a factual challenge to standing, Judge Hunt found Fry v. Ancestry.com Operations Inc., 2023 U.S. Dist. LEXIS 50330 (N.D. Ind. Mar. 24, 2023) to be instructive. Id. at *11. In Fry, the court cautioned that a plaintiff asserting a right of publicity claim must ultimately produce evidence showing that his likeness was viewed by someone other than his attorney or their agents. That same “forewarning,” Judge Hunt concluded, applied to Plaintiff, who presented no such evidence. Id. at *12-13.

The Court also dismissed Plaintiff’s request for injunctive relief, holding that any potential future harm was speculative and not sufficiently imminent. Because Plaintiff had relocated to Minnesota, the IRPA, which does not apply extraterritorially, could not reach her circumstances. Id. at *16.

Finally, the Court declined to allow the substitution of new named plaintiffs so that the case could continue, reasoning that because the original plaintiff lacked standing from the outset, the Court never had jurisdiction to allow substitution. Id. at *17.

Implications For Companies

Azuz underscores the importance of scrutinizing Article III standing at every stage of litigation, particularly in statutory publicity and privacy cases. Where plaintiffs cannot show that a third party viewed or interacted with their data, courts are likely to find no concrete injury — and therefore no federal jurisdiction.

Website operators facing IRPA or similar publicity-based class actions should consider asserting factual standing challenges supported by evidence demonstrating the absence of third-party access. Such jurisdictional defenses can be decisive and may be raised at any time in the litigation.

Hospital Defeats Wiretap Adtech Class Action After Texas Federal Court Finds No Knowing Disclosure Of Protected Health Information

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: On September 22, 2025, in Sweat v. Houston Methodist Hospital, No. 24-CV-00775, 2025 U.S. Dist. LEXIS 185310 (S.D. Tex. Sept. 22, 2025), Judge Lee H. Rosenthal of the U.S. District Court for the Southern District of Texas granted a motion for summary judgment in favor of a hospital accused of violating the federal Wiretap Act through its use of website advertising technology. This decision is significant: amid the wave of adtech class actions seeking millions – sometimes billions – in statutory damages under the Wiretap Act and similar statutes, the Court held that the Act’s steep penalties (up to $10,000 per violation) were not triggered because the hospital did not knowingly transmit protected health information.

Background

This case is part of a rapidly growing line of class actions alleging that website advertising tools – such as the Meta Pixel, Google Analytics, and similar technologies, collectively known as “adtech” – secretly capture users’ web-browsing activity and share it with third-party advertising platforms.

Adtech is ubiquitous, embedded on millions of websites. Plaintiffs’ lawyers frequently invoke the federal Wiretap Act, the Video Privacy Protection Act (VPPA), state invasion-of-privacy statutes like the California Invasion of Privacy Act (CIPA), and even the Illinois Genetic Information Privacy Act (GIPA). Their theory is straightforward: multiply hundreds of thousands of website visitors by $10,000 per alleged Wiretap Act violation and the potential damages skyrocket. While some of these class actions have resulted in multi-million-dollar settlements, others have been dismissed (as we blogged about here), and the vast majority remain pending. With some district courts allowing adtech class actions to survive motions to dismiss (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an aggressive pace.
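
To make the plaintiffs’ multiplication theory concrete, consider the following back-of-the-envelope sketch. The class size is a hypothetical figure, and the per-violation amounts are the statutory damages figures discussed in this post; nothing here reflects any actual case.

```python
# Illustrative only: hypothetical class size; per-violation amounts are the
# statutory damages figures plaintiffs typically invoke.
STATUTORY_DAMAGES = {
    "Wiretap Act (18 U.S.C. § 2520(c)(2))": 10_000,
    "VPPA (18 U.S.C. § 2710(c)(2))": 2_500,
}

class_size = 200_000  # assumed number of website visitors in the putative class

for statute, per_violation in STATUTORY_DAMAGES.items():
    print(f"{statute}: {class_size:,} x ${per_violation:,} = ${class_size * per_violation:,}")
# Wiretap Act: 200,000 x $10,000 = $2,000,000,000
# VPPA:        200,000 x $2,500  = $500,000,000
```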

In Sweat, the plaintiffs sued a hospital, seeking to represent a class of patients whose personal health information was allegedly disclosed by the Meta Pixel installed on the hospital’s website. The district court granted the hospital’s motion to dismiss the state law invasion of privacy claim but allowed the Wiretap Act claim to proceed to discovery. The hospital then moved for summary judgment, arguing that the Wiretap Act’s crime-tort exception did not apply because the hospital lacked knowledge that it was disclosing protected health information.

Under the Wiretap Act, a “party to the communication” cannot be sued unless it intercepted the communication “for the purpose of committing any criminal or tortious act.” 18 U.S.C. § 2511(2)(d). This provision is commonly called the “crime-tort exception.” The plaintiffs pointed to alleged violations of the Health Insurance Portability and Accountability Act (HIPAA) as the predicate crime to trigger this exception.

The Court’s Decision

The Court agreed with the hospital and granted summary judgment, holding that the record contained no evidence that the hospital acted with the “purpose of committing any criminal or tortious act” that would trigger the crime-tort exception. 2025 U.S. Dist. LEXIS 185310, at *13.

As the Court explained, courts have developed two different approaches to determining “purpose” under the crime-tort exception. Some use the “independent act” approach, under which the unlawful act must be independent of the interception itself. Others use the “primary purpose” approach, under which the defendant’s primary motivation must be to commit a crime or tort.

Applying the “primary purpose” approach, the Court found “no evidence that [the hospital] acted with the purpose of violating HIPAA…the evidence shows that it did not know it was doing so.” Id. at *13. In so holding, the Court cited the fact that, although the Pixel was installed on “arguably sensitive portions” of the hospital’s website, the hospital received only aggregated, anonymized data, and there was no proof it knew any protected health information was being disclosed. Id. at *13-14. The Court rejected the plaintiffs’ argument that anonymized aggregate data necessarily originates from identifiable data, emphasizing that Meta’s algorithm could anonymize data “at the input level,” preventing the hospital from receiving identifiable data in the first place. Id. at *16.
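
For readers unfamiliar with the mechanics, “input-level” anonymization can be sketched roughly as follows. This is a minimal, hypothetical illustration, not Meta’s actual Pixel code; the point is that a one-way hash applied before transmission means the recipient never receives the raw identifier.

```python
# Minimal sketch of input-level anonymization (hypothetical; not Meta's code).
import hashlib

def anonymize(identifier: str) -> str:
    """One-way SHA-256 hash applied in the browser, before any data is sent."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

event = {
    "event": "PageView",
    "page": "/find-a-doctor",                  # hypothetical URL path
    "visitor": anonymize("jane@example.com"),  # digest only; the raw email never leaves the device
}
print(event)
```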

Implications For Companies

The Court’s holding in Sweat is a significant win for healthcare providers and other defendants facing adtech class actions. This ruling reinforces two key principles. First, knowledge is critical. Like the Wiretap Act’s HIPAA-based crime-tort exception, similar statutes such as the VPPA require a knowing disclosure of identifiable information. If a defendant lacks knowledge that data is tied to specific individuals, liability should not attach. Second, anonymization matters. Where transmissions are encrypted, anonymized, or otherwise inaccessible at the point of input, there may be no “disclosure” at all.

For example, the VPPA requires disclosure of a person’s specific video-viewing activity, and GIPA requires disclosure of an identified individual’s genetic information. When adtech merely sends anonymized or encrypted data to third-party algorithms—data that cannot be traced back to a specific person—there is no knowing disclosure.

Sweat provides strong authority for defendants to argue that anonymized adtech transmissions cannot satisfy the statutory knowledge requirements of the Wiretap Act’s HIPAA-based crime-tort exception or similarly worded privacy statutes.

California Adopts New Rules Expanding The FEHA’s Reach To AI Tool Developers

By Gerald L. Maatman, Jr., Justin Donoho, and George J. Schaller

Duane Morris Takeaways: On October 1, 2025, California’s “Employment Regulations Regarding Automated-Decision Systems” will take effect.  These new AI employment regulations can be accessed here.  The regulations add an “agency” theory under the California Fair Employment and Housing Act (FEHA) and formalize that theory’s applicability to AI tool developers and companies employing AI tools that facilitate human decision-making for recruitment, hiring, and promotion of job applicants and employees.  Because the FEHA provides a private right of action, these new AI employment regulations may augur an uptick in AI employment tool class actions brought under the FEHA.  This blog post identifies key provisions of this new law and steps employers and AI tool developers can take to mitigate FEHA class action risk.

Background 

In the widely-watched class action captioned Mobley v. Workday, No. 23-CV-770 (N.D. Cal.), the plaintiff alleges that an AI tool developer’s algorithm-based screening tools discriminated against job applicants on the basis of race, age, and disability in violation of Title VII of the Civil Rights Act of 1964 (“Title VII”), the Age Discrimination in Employment Act of 1967 (“ADEA”), the Americans with Disabilities Act Amendments Act of 2008 (“ADA”), and California’s FEHA.  Last year, the U.S. District Court for the Northern District of California denied dismissal of the Title VII, ADEA, and ADA disparate impact claims on the theory that the developer of the algorithm was plausibly alleged to be the employer’s agent, and dismissed the FEHA claim, which was brought only under the then-available theory of intentional aiding and abetting (as we previously blogged about here).

In recent years, discrimination stemming from AI employment tools has been addressed by other state and local statutes, including Colorado’s AI Act (CAIA) setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination,” New York City’s law regarding the use of automated employment decision tools, the Illinois AI Video Interview Act, and the 2024 amendment to the Illinois Human Rights Act (IHRA) to regulate the use of AI, with only the last of these laws providing for a private right of action (once it becomes effective January 1, 2026).

Key Provisions Of California’s AI Employment Regulations

California’s AI employment regulations amend and clarify how the FEHA applies to AI employment tools, thus constituting a new development in case theories available to class action plaintiffs regarding alleged harms stemming from AI systems and algorithmic discrimination.  

Employers and AI employment tool developers should take note of key provisions codified by California’s new AI employment regulations, as follows:

  • Agency theory.  The regulations add an “agency” theory under the FEHA like the one that allowed the plaintiff in Mobley v. Workday to proceed past a motion to dismiss on his federal claims, whereby an AI tool developer may face litigation risk for developing algorithms that result in a disparate impact when the tool is used by an employer.  While Mobley v. Workday continues to proceed in the trial court, no appellate authority has yet had occasion to address the “agency” theories being litigated in that case under federal antidiscrimination statutes.  However, with the California AI employment regulations taking effect October 1, 2025, that theory is now expressly codified under the FEHA.  2 Cal. Code Regs § 11008(a).
  • Proxies for discrimination.  The regulations clarify that it is unlawful to use an employment tool algorithm that discriminates by using a “proxy,” which the regulations define as a “characteristic or category closely correlated with a basis protected by the Act.”  Id. §§ 11008(a), 11009(f).  While the regulations do not explicitly identify any proxies, proxies that have been identified in literature by the EEOC’s former Chief Analyst include zip code (this proxy is also codified in the IHRA), first name, alma mater, credit history, and participation in hobbies or extracurricular activities.
  • Anti-bias testing.  The regulations state that relevant to a claim of employment discrimination or an available defense are “anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of such testing or other effort, and the response to the results.”  Id. § 11020(b).  Thus, for example, adoption of NIST’s AI risk management framework, itself codified as a defense under the CAIA, could be a factor to consider as a defense under the FEHA.  Many other factors are pertinent with respect to anti-bias testing, including auditing, tuning, and the use of various interpretability methods and fairness metrics, discussed in our prior blog entry and article on this subject (here); a simple form of such testing is sketched after this list.
  • Data retention.  The regulations provide that employers, employment agencies, labor organizations, and apprenticeship training programs must maintain employment records, including automated-decision data, for a minimum of four years.  Id. § 11013(c).
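
As a concrete illustration of what basic anti-bias testing can look like, the sketch below computes an adverse impact ratio on hypothetical screening data and applies the EEOC’s four-fifths rule of thumb. The data, group labels, and threshold are illustrative assumptions; the regulations themselves do not codify this particular test.

```python
# Hypothetical anti-bias check using the EEOC's four-fifths rule of thumb.
# All numbers are invented for illustration.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Assumed outcomes from an AI resume-screening tool, by demographic group.
rates = {
    "group_a": selection_rate(180, 400),  # 45% selected
    "group_b": selection_rate(60, 200),   # 30% selected
}

impact_ratio = min(rates.values()) / max(rates.values())
print(f"Adverse impact ratio: {impact_ratio:.2f}")  # 0.67

if impact_ratio < 0.8:
    print("Ratio below 4/5: flag the tool for auditing, tuning, and legal review.")
```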

Implications For Employers

California’s AI employment regulations increase employers’ and AI tool developers’ risks of facing class action lawsuits similar to Mobley v. Workday and/or other lawsuits alleging discrimination under the FEHA.  However, developers and employers have several tools at their disposal to mitigate AI employment tool class action risk.  One is to ensure that AI employment tools comply with the FEHA provisions discussed above and with other antidiscrimination statutes.  Others include adding or updating arbitration agreements to mitigate the risks of mass arbitration; collaborating with IT, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and updating notices to third parties and vendor agreements.

Crypto Class Action Key Decisions and Trends in 2025

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Legal Intelligencer by Justin Donoho entitled “Crypto Class Action Key Decisions and Trends in 2025.”  The article is available here and is a must-read for corporate counsel involved with crypto and blockchain technologies.

This year has already been a busy one in the crypto class action litigation landscape.  It has seen several significant court decisions that have continued to shape the law in this growing area, including decisions on dispositive motions regarding whether various crypto transactions are sales of unregistered “securities” and, if so, whether the operator of a crypto exchange may be held liable for such transactions.  Two split decisions on class certification were also issued, showing why claims for the sale of unregistered securities remain popular with the plaintiffs’ bar, whereas other types of claims increasingly being brought by the plaintiffs’ bar face significant hurdles to class certification.  There have also been several multimillion-dollar crypto class action settlements.  In addition, dozens of new crypto class actions have been filed, auguring continued development in this area.  This article analyzes these key decisions and trends.

Implications For Corporations

With crypto assets continuing to proliferate and the current presidential administration reducing enforcement priorities relating to sales of crypto assets, crypto class action litigation is multiplying.  We should expect to see an upward trend of key decisions and new cases in the remainder of this year and beyond, as this burgeoning area of the law continues to unfold.

New York Federal Court Dismisses Adtech Class Action Because No Ordinary Person Could Identify Web User

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Ryan Garippo

Duane Morris Takeaways:  On September 3, 2025, in Golden v. NBCUniversal Media, LLC, No. 22-CV-9858, 2025 WL 2530689 (S.D.N.Y. Sept. 3, 2025), Judge Paul A. Engelmayer of the U.S. District Court for the Southern District of New York granted a motion to dismiss with prejudice for a media company on a claim that the company’s use of website advertising technology on its website violated the Video Privacy Protection Act (“VPPA”).  The ruling is significant amid the nationwide explosion of adtech class actions seeking millions or billions of dollars in statutory damages under the VPPA and myriad similar statutes on theories that the website owner disclosed website activities to Facebook, Google, and other advertising agencies.  It shows that the statute and its harsh penalties should not be triggered where no ordinary person could access and decipher the information transmitted.

Background

This case is one of a multiplying legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing activity and sent it to Meta, Google, and other online advertising agencies.

This software, often called website advertising technology or “adtech,” is a common feature on corporate, governmental, and other websites in operation today.  In adtech class actions, the key claim is often brought under the VPPA, a federal or state wiretap act, a consumer fraud act, or even the Illinois Genetic Information Privacy Act (GIPA), because these statutes allow plaintiffs to seek millions (and sometimes even billions) of dollars, even from midsize companies: hundreds of thousands of website visitors times $2,500 per claimant in statutory damages under the VPPA, for example, equals an enormous sum.  Plaintiffs have filed the bulk of these lawsuits to date against healthcare providers, but they have also sued companies spanning nearly every industry, including retailers, consumer products companies, and universities.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.  With some district courts being more permissive than others in allowing adtech class actions to proceed beyond the motion to dismiss stage (as we blogged about here), the plaintiffs’ bar continues to file adtech class actions at an alarming rate.

In Golden, the plaintiff brought suit against a media company.  According to the plaintiff, she signed up for an online newsletter offered by the media company and thereafter visited the media company’s website, where she watched videos.  Id. at *2-4.  The plaintiff further alleged that, after she watched those videos, her video-watching history was sent to Meta without her permission via the media company’s undisclosed use of the Meta Pixel on its website.  Id.  Like plaintiffs in most adtech class action complaints, this plaintiff: (1) alleged that before the company sent the web-browsing data to the online advertising agency (e.g., Meta), the company encrypted the data via the secure “https” protocol (id., ECF No. 56 ¶ 45); and (2) did not allege that any human, or even the advertising agency itself, stored her web-browsing data or could retrieve it from the advertising agency’s algorithms in a decrypted (readable) format.  Based on these allegations, the plaintiff claimed a violation of the VPPA.

The media company moved to dismiss under Rule 12(b)(6), arguing that the plaintiff did not adequately allege that the media company “disclosed” her “personally identifiable information” (“PII”), defined under the VPPA as “information which identifies a person as having requested or obtained specific video materials or services….”  Id., 2025 WL 2530689, at *5-6.

The Court’s Decision

The Court agreed with the media company and held that the plaintiff failed plausibly to plead any unauthorized “disclosure.” 

As the Court explained, “PII, under the VPPA, has three distinct elements: (1) the consumer’s identity, (2) the video material’s identity, and (3) the connection between them.”  Id. at *6.  Moreover, PII “encompasses information that would allow an ordinary person to identify a consumer’s video-watching habits, but not information that only a sophisticated technology company could use to do so.”  Id. (emphasis in original).  Therefore, “to survive a motion to dismiss, a complaint must plausibly allege that the defendant’s disclosure of information would, with little or no extra effort, permit an ordinary recipient to identify the plaintiff’s video-watching habits.”  Id.  For these reasons, explained the Court, the Second Circuit has “effectively shut the door for Pixel-based VPPA claims.”  Id. at *7 (citing Hughes v. National Football League, 2025 WL 1720295 (2d Cir. June 20, 2025)).

Applying these standards, the Court dismissed the plaintiff’s VPPA claim with prejudice, holding that, “[i]n short, because the alleged disclosure could not be appreciated — decoded to reveal the actual identity of the user, and his or her video selections — by an ordinary person but only by a technology company such as Facebook, it did not amount to PII.”  Id. at *6-7.  In so holding, the Court cited an “emergent line of authority” shutting the door on VPPA claims not only in the Second Circuit but also in other U.S. Courts of Appeals.  See In re Nickelodeon Consumer Priv. Litig., 827 F.3d 262, 283 (3d Cir. 2016) (affirming dismissal of VPPA case involving the use of Google Analytics, stating, “To an average person, an IP address or a digital code in a cookie file would likely be of little help in trying to identify an actual person”); Eichenberger v. ESPN, Inc., 876 F.3d 979, 986 (9th Cir. 2017) (affirming dismissal of VPPA case because “an ordinary person could not use the information that Defendant allegedly disclosed [a device serial number] to identify an individual”).

Implications For Companies

The Court’s holding in Golden is a win for adtech class action defendants and should be instructive for courts around the country addressing adtech class actions brought under not only the VPPA, but also other statutes prohibiting “disclosures,” and the like.  These statutes should be interpreted similarly to require proof that an ordinary person could access and decipher the web-browsing data, identify the person, and link the person to the data. 

Consider a few examples.  A GIPA claim requires proof of a disclosure or a breach of confidentiality and privilege.  An eavesdropping claim under the California Invasion of Privacy Act (CIPA) § 632 requires proof of eavesdropping.  A trap and trace claim under CIPA § 638.51 requires proof that the data captured is reasonably likely to identify the source of the data.  A claim under the Electronic Communications Privacy Act (ECPA) requires proof of an interception.

When adtech sends encrypted, inaccessible, anonymized transmissions to the advertising agency’s algorithms, has there been any disclosure or breach of confidentiality and privilege (GIPA), eavesdropping (CIPA § 632), data capture reasonably likely to identify the source (CIPA § 638.51), or interception (ECPA)?  Golden suggests not: just as adtech transmissions are insufficient to amount to a disclosure under the VPPA, they should not trigger these similarly worded statutes, because no ordinary person could access and decipher the data transmitted.

Best Practices To Mitigate The Risk Of Class Action Litigation Over AI Pricing Tool Noncompliance With Antitrust And AI Statutes

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Ten Design Guidelines to Mitigate the Risk of AI Pricing Tool Noncompliance with the Federal Trade Commission Act, Sherman Act, and Colorado AI Act.”  The article is available here and is a must-read for corporate counsel involved with development or deployment of AI pricing tools.

While artificial intelligence (AI) pricing tools can improve revenues for retailers, suppliers, hotel operators, landlords, ride-hailing platforms, airlines, ticket distributors, and more, designers and deployers of such tools increasingly face the risk of being targeted in lawsuits brought by governmental bodies and class action plaintiffs alleging unfair methods of competition in violation of the Federal Trade Commission (FTC) Act and agreements that restrain trade in violation of the federal Sherman Act.  This article identifies recently emerging trends in such lawsuits, including one currently on appeal in the U.S. Court of Appeals for the Third Circuit and three pending in district courts, draws common threads, and discusses ten guidelines that AI pricing tool designers should consider to mitigate the risk of noncompliance with the FTC Act, the Sherman Act, and the Colorado AI Act.

Implications For Corporations

AI pricing tools designed to comply with antitrust and AI laws face a lower risk than noncompliant tools of an expensive class action lawsuit or government-initiated proceeding alleging violations of such laws.  Moreover, by enabling and automating informed pricing decisions, AI pricing tools hold the potential to drive market efficiencies.  This article identifies best practices to assist with such compliance and, relatedly, such market efficiencies.

Illinois Federal Courts Allow Adtech And Edtech ECPA Claims To Proceed, Furthering Split Of Authority

By Gerald L. Maatman, Jr., Justin Donoho, Hayley Ryan, and Tyler Zmick

Duane Morris Takeaways:  On August 20, 2025, in Hannant v. Sarah D. Culbertson Memorial Hospital, 2025 WL 2413894 (C.D. Ill. Aug. 20, 2025), Judge Sara Darrow of the U.S. District Court for the Central District of Illinois granted a motion to dismiss while allowing a website user to re-plead her claim that the hospital’s use of website advertising technology (“adtech”) violated the Electronic Communications Privacy Act (“ECPA”).  The same day, in Q.J. v. Powerschool Holdings, LLC, 2025 WL 2410472 (N.D. Ill. Aug. 20, 2025), Judge Jorge Alonso of the U.S. District Court for the Northern District of Illinois denied the Chicago school board and its educational technology (“edtech”) provider’s motion to dismiss a claim that their use of a third-party data analytics tool violated the ECPA.  These rulings are significant because, in the hundreds of adtech, edtech, and other internet-based technology class actions across the nation seeking millions (or billions) of dollars in statutory damages under the ECPA, Illinois federal courts have distinguished themselves from courts in other jurisdictions that have refused to interpret the ECPA in so plaintiff-friendly a manner.

Background

These cases are two of a legion of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing data and sent it to Meta, Google, and other online advertising agencies and/or data analytics companies.  In these adtech, edtech, and similar class actions, the centerpiece is often an ECPA claim premised on the theory that hundreds of thousands of website visitors times $10,000 per claimant in statutory damages equals a huge amount of damages.  Plaintiffs have filed the bulk of these types of lawsuits to date against healthcare providers, but they have also filed suits against companies spanning nearly every industry, including education, retailers, and consumer products.  Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.

In Hannant, the plaintiff brought suit against a hospital.  According to the plaintiff, the hospital installed the Meta Pixel on its website, thereby transmitting to Meta, allegedly without the plaintiff’s consent, data about her visit to the hospital’s website. 

In Q.J., the plaintiff brought suit against the Chicago school board and its edtech provider.  According to the plaintiff, the school board and edtech provider installed a third-party data analytics tool called Heap Autocapture on the edtech provider’s online platform, thereby transmitting to Heap, allegedly without consent, information about students’ visits to the online platform.

In both lawsuits, the plaintiffs claimed that these alleged events amounted to an “interception” by the defendant in violation of the ECPA.  Neither defendant contested whether the plaintiff had plausibly alleged an “interception,” even though the alleged events more closely resembled the catching and forwarding of a different ball than an interception: in Hannant, the complaint alleged that the communication Meta received was not the same transmission but a “duplicate[]” that was “forward[ed]” (No. 24-CV-4164, ECF No. 14 ¶¶ 49, 363), and in Q.J. the allegations of a purported “interception” were wholly conclusory.  Both defendants instead moved to dismiss the ECPA claim on the grounds that, to the extent there was any interception, no liability exists under the ECPA’s exception for a party that does not act “for the purpose of committing any criminal or tortious act.”  18 U.S.C. § 2511(2)(d).  (A rough sketch of the duplicate-and-forward mechanics appears below.)
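
To illustrate the duplicate-and-forward point, the following sketch shows, in simplified and hypothetical form, how pixel-style adtech typically operates: the visitor’s own browser generates a second, separate request to the analytics host, rather than anything being seized mid-flight between the user and the website. The endpoints and payload fields are invented for illustration.

```python
# Conceptual sketch (hypothetical endpoints and fields) of pixel-style adtech:
# the browser duplicates the event and forwards a copy in a separate request.
def send(url: str, payload: dict) -> None:
    # Stand-in for an HTTP POST; printing keeps the sketch self-contained.
    print(f"POST {url} -> {payload}")

event = {"page": "/appointments", "action": "button_click"}

send("https://www.example-hospital.test/log", event)   # first-party request
send("https://analytics.example.test/collect", event)  # duplicated copy, forwarded separately
```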

The Courts’ Decisions

In Hannant, the Court dismissed the ECPA claim without prejudice and granted the plaintiff leave to re-plead in a fashion that might allow an amended complaint to withstand a renewed motion to dismiss.  Specifically, the Court found that an amendment might plausibly allege a criminal or tortious purpose by adding sufficient detail about the plaintiff’s website interactions to show that there had been a violation of the Health Insurance Portability and Accountability Act (“HIPAA”), which provides for criminal and civil penalties against a person “who knowingly … discloses individually identifiable health information [(‘IIHI’)] to another person.”  2025 WL 2413894, at *3 (quoting 42 U.S.C. § 1320d-6).  As the Court explained, under adtech class-action precedent in the U.S. District Court for the Northern District of Illinois, adding additional detail regarding the alleged transmission of IIHI could be enough to allege a criminal or tortious purpose.  Id. at *3-5.

In Q.J., the Court denied the school board and edtech provider’s motion to dismiss, citing the same plaintiff-friendly precedent in the Northern District of Illinois cited by the opinion in Hannant, and explaining that while the allegedly disclosed data in this educational context did not implicate HIPAA, the plaintiff had plausibly alleged that the transmissions at issue violated the Illinois School Student Records Act (“ISSRA”), 105 ILCS 10/6, and the Family Educational Rights and Privacy Act (“FERPA”), 20 U.S.C. § 1232g.  2025 WL 2410472, at *6.

Implications For Companies

In Illinois federal courts, pixels and cookies are no longer just marketing and educational tools – they are legal risk vectors.  By contrast, other U.S. District Courts ruling on Rule 12(b)(6) motions have found no plausibly alleged interception when an internet-based communication is forwarded as opposed to being intercepted mid-flight, and no plausibly alleged criminal or tortious purpose where the purpose was not to violate any statute but rather to engage in advertising or data analytics.  (See, e.g., our prior blog entry discussing one of these several cases, here.)  Website owners facing lawsuits in Illinois district courts would do well to press such arguments, which have found success in other jurisdictions, in order to preserve them for appeal in the Seventh Circuit, which has yet to rule on these issues.  In addition, other defenses remain, including demonstrating that plaintiffs cannot meet their burden of proving an actual disclosure to a third party where transmissions of information entered on the website to adtech vendors and data analytics providers such as Meta or Google are encrypted, ephemeral, anonymized, aggregated, and otherwise unviewable and irretrievable by any human.

Corporate counsel seeking to deter ECPA litigation should keep in mind the following best practices (discussed in more detail in our prior blog post, here): (1) add or update arbitration clauses to deter class actions and mitigate the risks of mass arbitration; (2) update website terms of use, data privacy policies, and vendor agreements; and (3) audit and adjust uses of website advertising technologies.

Ninth Circuit Affirms Summary Judgment For Defendant On CIPA Claim For Aiding And Abetting Third-Party Software Provider

By Gerald L. Maatman, Jr., Justin Donoho, and Ryan Garippo

Duane Morris Takeaways:  On July 9, 2025, in Gutierrez, et al. v. Converse, Inc., No. 24-4797, 2025 WL 1895315 (9th Cir. July 9, 2025), the Ninth Circuit held that a plaintiff had no evidence from which a reasonable jury could conclude that an online retailer’s use of third-party software to enable a chat feature on its website aided and abetted the third-party vendor in reading, or attempting to read, the contents of the plaintiff’s chat messages in real time in alleged violation of the California Invasion of Privacy Act (CIPA).  In rejecting this theory, the ruling is significant because it shows that CIPA claims involving alleged disclosures of website activities to third-party software providers cannot survive unless the plaintiff can show that the website owner enabled the third party to read unencrypted, real-time communications.

Background

This case is one of a legion of class actions that plaintiffs have filed nationwide alleging that third-party software embedded in defendants’ websites secretly captured plaintiffs’ web-browsing activity and sent it to the third-party provider of the software.  Third-party software is a common feature on many websites today and comes in many forms including website advertising technologies (“adtech”), customer relationship management (“CRM”) software, enterprise resource management (“ERP”) software, and, as in this case, communications platforms.

In Gutierrez, Plaintiff brought suit against an online retailer.  According to Plaintiff, the retailer installed a chat feature on its public-facing website and thereby transmitted chat communications entered on the website to Salesforce, a third-party provider of the chat feature to the online retailer in the form of “software as a service” (“SaaS”).  2024 WL 3511648, at *2 (C.D. Cal. July 12, 2024). 

As has been standard practice since the Snowden disclosures in 2013, all of these transmissions among the web user, the website, and the third-party software provider were “encrypted while in transit.”  Id. at *3.  Moreover, as is true for all internet communications, the chats were transmitted “in different network packets.”  Id.  Thus, the uncontroverted expert evidence showed that “it is ‘virtually impossible’ to learn the contents of an internet communication while it is in transit.”  Id.
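
The following minimal sketch illustrates why that is so.  It uses the Python cryptography package’s AES-GCM cipher as a stand-in for a negotiated TLS session key (TLS itself is considerably more complex, and this is not Salesforce’s or the retailer’s actual stack); the chat text, key handling, and packet size are illustrative assumptions.

```python
# Illustrative sketch: an encrypted chat message, split across network packets,
# is opaque to any observer who lacks the endpoints' session key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stand-in for a TLS session key
nonce = os.urandom(12)
chat = b"Hi, where is my order?"

ciphertext = AESGCM(key).encrypt(nonce, chat, None)

# In transit, the message appears only as opaque bytes spread across packets.
packet_size = 16
packets = [ciphertext[i:i + packet_size] for i in range(0, len(ciphertext), packet_size)]
print([p.hex() for p in packets])  # unreadable without the key

# Only an endpoint holding the session key can recover the plaintext.
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```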

The online retailer’s chat data, including chat transcripts, were stored on Salesforce’s servers.  Id.  However, this information was accessible in unencrypted format only through the retailer’s password-protected dashboard.  Id.  Plaintiff offered no evidence to show that Salesforce had access to the retailer’s dashboard or that the retailer ever provided Salesforce access to it.  Id.

Based on these facts, Plaintiff argued that the retailer violated the CIPA by aiding and abetting Salesforce’s wiretapping or attempts to learn her chat communications on the retailer’s website. 

The District Court granted the retailer’s motion for summary judgment for multiple reasons.  First, the District Court found as a matter of law that Salesforce did not violate CIPA’s first clause prohibiting intentional wiretapping or making any unauthorized connection “with any telegraph or telephone wire, line, cable, or instrument” because “Courts have consistently interpreted this clause as applying only to communications over telephones and not through the internet.”  Id. at *6-7. 

Second, the District Court found no genuine dispute of material fact existed as to whether Salesforce had violated the second clause of CIPA, Section 631(a), “because Plaintiff has presented no evidence from which a reasonable jury could conclude Salesforce intercepts messages sent through [the retailer]’s chat feature ‘while … in transit’ or reads or attempts to read or learn the contents of such messages.”  Id. at *7.  As the District Court explained, “uncontroverted evidence establishes messages sent through [the retailer]’s chat feature are encrypted while in transit and, moreover, it is ‘virtually impossible’ to learn the contents of an internet communication while it is in transit because internet communications are transmitted ‘in different network packets[.]’”  Further, the District Court stated that “the fact that a user is redirected to a Salesforce-owned URL upon opening the chat feature on [the retailer]’s website does not establish the user’s messages are sent to Salesforce or Salesforce reads or attempts to read or learn the contents of such messages. Rather, this fact simply establishes . . . the user’s messages are transmitted to [the retailer]’s Service Cloud application.”  Id.  In addition, the District Court explained that “the existence of UUID [Universally Unique Identifier] values attached to chat messages and the mere possibility Salesforce ‘can’ use these values to ‘connect the dots’ between data are insufficient to establish a genuine issue of material fact as to whether Salesforce reads or attempts to read users’ messages while they are in transit.”  Id.

Finally, the District Court found that “because Plaintiff has not established an underlying violation of Section 631(a)’s first or second clause by Salesforce, [the retailer] cannot be liable for aiding and abetting Salesforce.”

The Ninth Circuit’s Opinion

The Ninth Circuit agreed with the retailer, finding that summary judgment was warranted, and affirmed the order below. 

In a short opinion, the Ninth Circuit affirmed the District Court’s opinion, finding that “no evidence exists from which a reasonable jury could conclude” that Salesforce engaged in wiretapping or attempted to learn Plaintiff’s chat communications on the retailer’s website and that, therefore, absent an underlying violation by Salesforce, there was no aiding and abetting liability by the retailer.  Id. at *1.

Circuit Judge Jay Bybee agreed, filing a separate concurring opinion stating that summary judgment on the wiretapping claim should be affirmed because “the statute, as passed in 1967, focuses on the wiretapping of telegraph or telephone wires—it criminalizes, as relevant here, the wiretapping of a telephone call” and, thus, CIPA’s clause prohibiting wiretapping “does not apply to the internet.”  Id. at *2-3.  Further, Judge Bybee opined: “Until and unless the California appellate courts tell us otherwise, or the California legislature amends § 631(a), I refuse to apply § 631(a)’s first clause to the internet.”  Id. at *3. 

Implications For Companies

The District Court’s holding and the Ninth Circuit’s affirmance in Gutierrez are a win for CIPA class action defendants and should be instructive for courts around the country.  In the hundreds of CIPA class actions alleging a defendant’s disclosure of web-browsing activities to an adtech provider, for example, the plaintiff typically does not allege that the adtech provider has any ability to read any unencrypted version of the information disclosed.  This is not surprising, since the largest adtech providers typically named in CIPA adtech class actions encrypt, anonymize, aggregate, and otherwise prevent even their own ability to access web users’ browsing activities in unencrypted format. 

Gutierrez shows, however, that to prove their CIPA claims, adtech plaintiffs will need to show that the owner of the website they visited enabled the third-party adtech provider to read unencrypted, real-time communications.

Data Security and Privacy Liability – Takeaways From The Sedona Conference Working Group 11 Annual Meeting in Redmond, WA

By Justin R. Donoho

Duane Morris Takeaways: Data privacy and data breach class action litigation continues to explode.  At the Sedona Conference Working Group 11 on Data Security and Privacy Liability, held at Microsoft’s campus in Redmond, Washington, on May 7, 2025, Justin Donoho of the Duane Morris Class Action Defense Group served as a dialogue leader for two panel discussions, “Individual Liability for Data Security Failures” and “Privacy and Data Security Litigation Update.”  The working group meeting, which spanned two days and had over 50 participants, produced excellent dialogues on these topics and others, including AI statutory guidance, shifting U.S. federal regulatory priorities in the privacy and data security landscape, a privacy and data security state regulator roundtable, emerging issues and trends in the cyber threat landscape, and law firm data security.

The Conference’s robust agenda featured over 30 dialogue leaders from a wide array of backgrounds, including government officials, data security industry experts, a district court judge, in-house attorneys, cyber and data privacy law professors, plaintiffs’ attorneys, and defense attorneys.  In a masterful way, the agenda provided valuable insights for participants toward this working group’s mission, which is to identify and comment on trends in data security and privacy law, in an effort to help organizations prepare for and respond to data breaches, and to assist attorneys and judicial officers in resolving questions of legal liability and damages.

Justin had the privilege of speaking about current trends in cases seeking individual liability for data security failures and in data privacy class actions.  A few highlights from his presentations included the SEC’s case brought against SolarWinds’ CISO Timothy Brown, which has CISOs worldwide on the edges of their seats (discussed in Justin’s article here), and two recent cases resulting in helpful precedent for defendants facing claims alleging privacy violations arising from their uses of website advertising technologies (adtech): one that disposed of an adtech class action due to consent by browsewrap (see here), and one that dismissed an adtech class action due to ambiguities found in a wiretap statute (see here).

Finally, one of the greatest joys of participating in Sedona Conference meetings is the opportunity to draw on the wisdom of fellow presenters and other participants from around the globe.  Highlights included:

  1. A lively dialogue among the panelists and other participants regarding trends in decisions on Article III standing and the costs and benefits defendants should consider when deciding whether to seek dismissal for plaintiffs’ lack of Article III standing.
  2. State regulators giving candid advice regarding what and what not to do following data breaches in terms of notifying their offices, participating in investigations, and attempting to negotiate settlements. 
  3. Experts of all stripes dissecting the Colorado Privacy Act, Colorado AI Act, and those statutes’ application to AI hiring tools in an effort to offer guidance to future legislators drafting similar statutes.
  4. Seasoned defense attorneys discussing how federal agencies responsible for rules regarding privacy and data security have responded to the new presidential administration’s “Regulatory Freeze Pending Review” memorandum, the personnel changes, actions, and reviews taken during the first months of the new administration, and the implications for regulated organizations.
  5. Cyber and cyber insurance experts leading a dialogue about emerging risks, regulatory challenges, liability concerns, and underwriting processes relating to cybersecurity.
  6. Law firm consultants addressing current issues with AI that law firms should consider when crafting their cybersecurity assessments, policies, and procedures.

Thank you to the Sedona Conference Working Group 11 and its incredible team, the fellow dialogue leaders, the engaging participants, and all others who helped make this meeting in Redmond, Washington, an informative and unforgettable experience.

For more information on the Duane Morris Class Action Group, including its Data Privacy Class Action Review e-book, and Data Breach Class Action Review e-book, please click the links here and here.

Best Practices To Mitigate The Risk Of AI Hiring Tool Noncompliance With Antidiscrimination Statutes

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Five Human Best Practices to Mitigate the Risk of AI Hiring Tool Noncompliance with Antidiscrimination Statutes.”  The article is available here and is a must-read for corporate counsel involved with development or deployment of AI hiring tools.

While artificial intelligence (AI) hiring tools can improve efficiencies in human resource functions, such as candidate sourcing, resume screening, interviewing, and background checks, AI has not replaced the need for humans to ensure that AI-assisted human resources (HR) practices comply with a wide range of antidiscrimination laws such as Title VII of the Civil Rights Act of 1964 (Title VII), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), the sections of Colorado’s AI Act setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination” (CAI), New York City’s law regarding the use of automated employment decision tools (NYC’s AI Law), the Illinois AI Video Interview Act (IAIVA), and the 2024 amendment to the Illinois Human Rights Act to regulate the use of AI (IHRA).  This article identifies human best practices to mitigate the risk of companies’ AI hiring tools violating the foregoing statutes, according to the statutes, EEOC regulations, and scholarly sources authored by EEOC personnel and leading data scientists.

Implications For Corporations

AI hiring tools designed from the outset to comply with antidiscrimination statutes are far more likely to do so.  Moreover, by eliminating some human decision-making and replacing it with carefully designed algorithms, AI holds the potential to substantially reduce the kind of bias that has been unlawful in the United States since the civil rights movement of the mid-twentieth century.  This article identifies human best practices to assist with such compliance and, relatedly, such reduction of bias.
