Ninth Circuit Dismisses Adtech Class Action For Lack Of Standing

By Gerald L. Maatman, Jr. and Justin Donoho

Duane Morris Takeaways:  On December 17, 2024, in Daghaly, et al. v. Bloomingdales.com, LLC, No. 23-4122, 2024 WL 5134350 (9th Cir. Dec. 17, 2024), the Ninth Circuit ruled that a plaintiff lacked Article III standing to bring her class action complaint alleging that an online retailer’s use of website advertising technology disclosed website visitors’ browsing activities in violation of the California Invasion of Privacy Act and other statutes.  The ruling is significant because it shows that adtech claims cannot be brought in federal court unless the complaint specifies the web browsing activities that allegedly were disclosed.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on many websites in operation today.
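For readers unfamiliar with the mechanics, the following is a minimal, hypothetical sketch of how pixel-style adtech reports a browsing event: an embedded script encodes the event into a query string and requests a tiny image from the adtech provider’s servers, so the visit data travels to the third party as an ordinary HTTP request.  The endpoint URL, function names, and field names below are our illustrations, not Meta’s or Google’s actual code.

```typescript
// Hypothetical pixel-style beacon; all names and the endpoint are illustrative only.
interface BrowsingEvent {
  eventName: string;           // e.g., "PageView" or "AddToCart"
  pageUrl: string;             // the page the visitor is viewing
  thirdPartyCookieId?: string; // identifier linking the visit to an ad profile
}

function reportEvent(collectorUrl: string, event: BrowsingEvent): void {
  const params = new URLSearchParams({
    ev: event.eventName,
    url: event.pageUrl,
    ...(event.thirdPartyCookieId ? { id: event.thirdPartyCookieId } : {}),
  });
  // Requesting a 1x1 image delivers the event to the third party's servers.
  const img = new Image();
  img.src = `${collectorUrl}?${params.toString()}`;
}

// Example: a page script firing on load.
reportEvent("https://collector.example.com/tr", {
  eventName: "PageView",
  pageUrl: window.location.href,
});
```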

In Daghaly, Plaintiff brought suit against an online retailer.  According to Plaintiff, the retailer installed the Meta Pixel and other adtech on its public-facing website and thereby transmitted web-browsing information entered by visitors, such as which products the visitor clicked on and whether the visitor added the product to his or her shopping cart or wish list.  Id., No. 23-CV-129, ECF No. 1 ¶¶ 44-45.  As for Plaintiff herself, she did not allege what she clicked on or what her web browsing activities entailed upon visiting the website, only that she accessed the website via the web browser on her phone and computer.  Id. ¶ 40.

Based on these allegations, Plaintiff alleged claims for violation of the California Invasion of Privacy Act (CIPA) and other statutes.  The district court dismissed the complaint for lack of personal jurisdiction.  Id., 697 F. Supp. 3d 996 (S.D. Cal. 2023).  Plaintiff appealed and, in its appellate response brief, the retailer argued for the first time that Plaintiff lacked Article III standing.

The Ninth Circuit’s Opinion

The Ninth Circuit agreed with the retailer, found that Plaintiff lacked standing, and remanded for further proceedings.

The Ninth Circuit opined that, to allege Article III standing, as is required to bring suit in federal court, a plaintiff must “clearly allege facts demonstrating” that she “suffered an injury in fact that is concrete, particularized, and actual or imminent.”  Id., 2024 WL 5134350, at *2 (citing, e.g., TransUnion LLC v. Ramirez, 594 U.S. 413, 423 (2021)). 

Plaintiff argued that she sufficiently alleged standing via her allegations that she “visited” and “accessed” the website and was “subjected to the interception of her Website Communications.”  Id. at *1.  Moreover, Plaintiff argued, the retailer’s alleged disclosure to adtech companies of the fact of her visiting the retailer’s website sufficiently alleged an invasion of her privacy and thereby established Article III standing because the adtech companies could use this fact to stitch together a broader, composite picture of Plaintiff’s online activities.  See oral argument, here.

The Ninth Circuit rejected these arguments.  It found that Plaintiff “does not allege that she herself actually made any communications that could have been intercepted once she had accessed the website. She does not assert, for example, that she made a purchase, entered text, or took any actions other than simply opening the webpage and then closing it.”  Id., 2024 WL 5134350, at *1.  As the Ninth Circuit explained during oral argument by way of example, Plaintiff had not alleged that she was shopping for underwear and that the retailer transmitted information about her underwear purchases.  Moreover, the Ninth Circuit found “no authority suggesting that the fact that she visited [the retailer’s website] (as opposed to information she might have entered while using the website) constitutes ‘contents’ of a communication within the meaning of CIPA Section 631.”  Id.

In short, the Ninth Circuit concluded that Plaintiff lacked Article III standing, and that this conclusion followed from Plaintiff’s failure to sufficiently allege the nature of her web browsing activities giving rise to all of her statutory claims.  Id. at *2.  The Ninth Circuit remanded with instructions that the district court grant leave to amend if properly requested.

Implications For Companies

The holding of Daghaly is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have found that an adtech plaintiff’s failure to identify what allegedly private information was disclosed via the adtech warrants dismissal under Rule 12(b)(6) for failure to plausibly plead various statutory and common-law claims.  See, e.g., our blog post about such a decision here.  Daghaly shows that, to have Article III standing to bring their federal lawsuits in the first place, adtech plaintiffs also must identify what allegedly private information, beyond the mere fact of a visit to an online retailer’s website, was disclosed via the adtech.

Florida Federal Court Refuses To Certify Adtech Class Action

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan K. Norimoto

Duane Morris Takeaways:  On October 1, 2024, Judge Robert Scola of the U.S. District Court for the Southern District of Florida denied class certification in a case involving website advertising technology (“adtech”) in Martinez v. D2C, LLC, 2024 WL 4367406 (S.D. Fla. Oct. 1, 2024).  The ruling is significant as it shows that plaintiffs who file class action complaints alleging improper use of adtech cannot satisfy Rule 23’s numerosity requirement merely by showing the presence of adtech on a website and numerous visitors to that website.  The Court’s reasoning in denying class certification applies not only to adtech cases raising claims under the Video Privacy Protection Act (“VPPA”), like this one, but also to adtech cases raising a wide variety of other statutory and common-law theories.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on millions of corporate, governmental, and other websites in operation today.

In Martinez, the plaintiffs brought suit against D2C, LLC d/b/a Univision NOW (“Univision”), an online video-streaming service.  The parties did not dispute, at least for the purposes of class certification, that: (A) Univision installed the Meta Pixel on its video-streaming website; (B) Univision was a “video tape service provider” and the plaintiffs and other Univision subscribers were “consumers” under the VPPA, thereby giving rise to liability under that statute if the plaintiffs could show that Univision transmitted to Meta, without their consent, their personally identifiable information (PII), such as their Facebook IDs, along with the videos they accessed; (C) none of the plaintiffs consented; and (D) 35,845 subscribers viewed at least one video on Univision’s website.  Id. at *2. 

The plaintiffs moved for class certification under Rule 23.  The plaintiffs maintained that at least 17,000 subscribers, including (or in addition to) them, had their PII disclosed to Meta by Univision.  Id. at *3.  The plaintiffs reached this number upon acknowledging “at least two impediments to a subscriber’s viewing information’s being transmitted to Meta: (1) not having a Facebook account; and (2) using a browser that, by default, blocks the Pixel.”  Id. at *6.  Thus, the plaintiffs pointed to “statistics regarding the percentage of people in the United States who have Facebook accounts (68%) and the testimony of their expert … regarding the percentage of the population who use a web browser that would not block the Pixel transmission (70%), to conclude, using ‘basic math,’ that the class would be comprised of ‘at least approximately 17,000 individuals.’”  Id. at *6.  In contrast, Univision maintained that the plaintiffs failed to carry their burden of showing that even a single subscriber had their PII disclosed, including the three named plaintiffs.  Id. at *3.
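The plaintiffs’ “basic math” is easy to reproduce.  The sketch below applies the two percentages from the opinion to the undisputed subscriber count; the variable names and the rounding are ours.

```typescript
// Reproducing the plaintiffs' "basic math" from the Martinez opinion.
const subscribersWhoViewedAVideo = 35_845;  // undisputed point (D) above
const shareWithFacebookAccounts = 0.68;     // U.S. Facebook-account statistic
const shareWithNonBlockingBrowsers = 0.70;  // plaintiffs' expert testimony

const estimatedClassSize =
  subscribersWhoViewedAVideo * shareWithFacebookAccounts * shareWithNonBlockingBrowsers;

console.log(Math.round(estimatedClassSize)); // 17062, the "at least approximately 17,000"
```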

The Court’s Decision

The Court agreed with Univision and held that the plaintiffs did not carry their burden of showing numerosity.

First, the Court held that the plaintiffs’ reliance on statistics regarding percentage of people who have Facebook accounts was unhelpful, because “being logged in to Facebook”—not just having an account—“is a prerequisite to the Pixel disclosing information.”  Id. at *7 (emphasis in original).  Moreover, “being simultaneously logged in to Facebook is still not enough to necessarily prompt a Pixel transmission: a subscriber must also have accessed the prerecorded video on Univision’s website through the same web browser and device through which the subscriber (and not another user) was logged into Facebook.”  Id.

Second, the Court held that the plaintiffs’ reliance on their proffer that 70% of people use Google Chrome and Microsoft Edge, which allow Pixel transmission “under default configurations,” failed to account for all of the following “actions a user can take that would also block any Pixel transmission to Meta: enabling a browser’s third-party cookie blockers; setting a browser’s cache to ‘self-destruct’; clearing cookies upon the end of a browser session; and deploying add-on software that blocks third-party cookies.”  Id.
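Stated as a single test, the Court’s reasoning implies that all of the following conditions must hold before a given subscriber’s viewing information would reach Meta.  The model below is our illustration of the opinion’s factors, not anything from the record; every field and function name is ours.

```typescript
// Illustrative model of the Court's factors; field names are ours, not the Court's.
interface SubscriberSession {
  loggedInToFacebook: boolean;                // not merely having an account
  sameBrowserAndDeviceAsFacebookLogin: boolean;
  browserBlocksPixelByDefault: boolean;       // false for, e.g., Chrome and Edge defaults
  thirdPartyCookiesBlockedOrCleared: boolean; // blockers, add-ons, self-destructing cache
}

function pixelTransmissionPossible(s: SubscriberSession): boolean {
  return (
    s.loggedInToFacebook &&
    s.sameBrowserAndDeviceAsFacebookLogin &&
    !s.browserBlocksPixelByDefault &&
    !s.thirdPartyCookiesBlockedOrCleared
  );
}
```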

In short, the Court reasoned that the plaintiffs did not establish “the means to make a supported factual finding, that the class to be certified meets the numerosity requirement.”  Id. at *9.  Moreover, the Court found that the plaintiffs had not demonstrated that “any” PII had been disclosed, including their own.  Id. (emphasis in original).  In reply, the plaintiffs attempted to introduce evidence supplied by Meta that one of the plaintiffs’ PII had been transmitted to Meta.  Id.  The Court refused to consider this new information, supplied for the first time on reply, and further found that even if it were to consider the new evidence, “this only gets the Plaintiffs to one ‘class member.’”  Id. at *10 (emphasis in original).

Finding the plaintiffs’ failure to satisfy the numerosity requirement dispositive, the Court declined to evaluate the other Rule 23 factors.  Id. at *5.

Implications For Companies

This case is a win for defendants in adtech class actions.  In such cases, the Martinez decision can be cited as useful precedent for showing that the numerosity requirement is not met where plaintiffs put forth only speculative evidence as to whether the adtech disclosed plaintiffs’ and alleged class members’ PII to third parties.  The Court’s reasoning in Martinez applies not only in VPPA cases but also in other adtech cases alleging claims for invasion of privacy, claims under state and federal wiretap acts, and more.  All these legal theories have adtech’s transmission of the PII to third parties as a necessary element.  In sum, to establish numerosity, plaintiffs must demonstrate, at a minimum, that class members were logged into their own adtech accounts at the time they visited the defendants’ website, used the same device and browser for the adtech account and the visit, used a browser that did not block the transmission by default, and did not deploy any of the browser settings or add-on software that would have blocked the transmission.

Georgia Federal Court Dismisses Data Privacy Class Action Against Healthcare Company For Failure To Sufficiently Allege Any Invasion Of Privacy, Damages, Or Wiretap Violation

By Gerald L. Maatman, Jr., Justin Donoho, and Ryan T. Garippo

Duane Morris Takeaways:  On August 24, 2024, in T.D. v. Piedmont Healthcare, Inc., No. 23-CV-5416 (N.D. Ga. Aug. 24, 2024), Judge Thomas Thrash of the U.S. District Court for the Northern District of Georgia dismissed in its entirety a class action complaint alleging that a healthcare company’s use of website advertising technology installed in its MyChart patient portal disclosed the plaintiffs’ private information in commission of the common-law torts of invasion of privacy, breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, and in violation of the federal wiretap act.  The ruling is significant because it shows that such claims cannot surmount Rule 12(b)(6)’s plausibility standard for legal reasons broadly applicable to a wide range of adtech class actions currently on file in many jurisdictions across the nation.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  As the Court explained, “cases like this have sprouted like weeds in recent years.”  Id. at 5.

In Piedmont, Plaintiffs brought suit against Piedmont Healthcare, Inc. (“Piedmont”).  According to Plaintiffs, Piedmont installed the Meta Pixel on its public-facing website and its secure patient portal, and thereby transmitted to Meta Plaintiffs’ “personally identifiable information (PII) and protected health information (PHI) without their consent.” Id. at 1-2.

Based on these allegations, Plaintiffs alleged claims for invasion of privacy, breach of fiduciary duty, negligence, breach of contract, unjust enrichment, and violation of the Electronic Communications Privacy Act (“ECPA”).  Piedmont moved to dismiss under Rule 12(b)(6) for failure to state sufficient facts that, if accepted as true, would state a claim for relief that is plausible on its face.

The Court’s Opinion

The Court agreed with Piedmont and dismissed all of Plaintiffs’ claims.

To state a claim for invasion of privacy, Plaintiffs were required to allege facts sufficient to show “an unreasonable and highly offensive intrusion upon another’s seclusion.”  Id. at 5.  Plaintiffs argued that Piedmont intruded upon their privacy by using the Meta Pixel to secretly transmit their PII and PHI to a third party for commercial gain.  Id. at 4.  Piedmont argued that these allegations failed to plausibly plead an intrusion or actionable intent, or that any intrusion was reasonably offensive or objectionable.  Id.  The Court concluded that “it seems that the weight of authority in similar pixel tracking cases is now solidly in favor of Piedmont’s argument. There is no intrusion upon privacy when a patient voluntarily provides personally identifiable information and protected health information to his or her healthcare provider.”  Id. at 5-6 (collecting cases).  The Court further commented that “it is widely understood that when browsing websites, your behavior may be tracked, studied, shared, and monetized. So it may not come as much of a surprise when you see an online advertisement for fertilizer shortly after searching for information about keeping your lawn green.”  Id. at 3-4.

To state claims for breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, one of the elements a plaintiff must allege is damages or, relatedly, enrichment.  Id. at 7-10.  Plaintiffs argued that they alleged seven categories of damages, as follows: “(i) invasion of privacy, including increased spam and targeted advertising they did not ask for; (ii) loss of confidentiality; (iii) embarrassment, emotional distress, humiliation and loss of enjoyment of life; (iv) lost time and opportunity costs associated with attempting to mitigate the consequences of the disclosure of their Private Information; (v) loss of benefit of the bargain; (vi) diminution of value of Private Information and (vii) the continued and ongoing risk to their Private Information.”  Id. at 9.  Piedmont argued that these damages theories stemming from “the provision of encrypted information only to Facebook” were implausible.  Id. at 7.  The Court agreed with Piedmont and rejected all of Plaintiffs’ damages theories.  Accordingly, it dismissed the remainder of Plaintiffs’ common-law claims.  As the Court explained: “No facts are alleged that would explain how receiving targeted advertisements from Facebook and Piedmont would plausibly cause any of the Plaintiffs to suffer these damages. This is not a case where the Plaintiffs’ personal information was stolen by criminal hackers with malicious intent. The Plaintiffs received targeted advertisements because they are Facebook users and have Facebook IDs. The Court finds the Plaintiffs’ damages theories untenable. Indeed, this court has rejected many identical theories arising under similar circumstances.”  Id. (collecting cases).

To state a claim for violation of the ECPA, also known as the federal wiretap act, a plaintiff must show an intentional interception of the contents of an electronic communication.  Id. at 11.  The ECPA is a one-party consent statute, meaning that there is no liability under the statute for any party to the communication “unless such communication is intercepted for the purposes of committing a criminal or tortious act in violation of the Constitution or laws of the United States or any State.”  18 U.S.C. § 2511(2)(d).  Piedmont argued that it could not have intercepted the same transmission it received on its website, nor could it have acted with a tortious or criminal purpose in seeking to drive marketing and revenue.  Id. at 10-11.  In response, the Plaintiffs contended that they stated a plausible ECPA claim, arguing that Piedmont intercepted the contents of their PII and PHI when it acquired such information through the Meta Pixel on its website and that the party exception is inapplicable because Piedmont acted with criminal and tortious intent in “wiretapping” their PII and PHI.  Id. at 11.  The Court concisely concluded: “As was the case in the invasion of privacy context, the weight of persuasive authority in similar pixel tracking cases supports Piedmont’s position.”  Id. at 11-12 (collecting cases).

Implications For Companies

The holding of Piedmont is a win for adtech class action defendants and should be instructive for courts around the country.  While many adtech cases around the country have made it past a motion to dismiss, many have not, and the outcome remains to be seen for the many that continue to be filed regularly.  Piedmont provides powerful precedent for any company defending against adtech class action claims for invasion of privacy, common-law claims for damages or unjust enrichment, and alleged violations of the federal wiretap act.

Illinois Federal Court Dismisses Class Action Privacy Claims Involving Use Of Samsung’s “Gallery” App

By Tyler Zmick, Justin Donoho, and Gerald L. Maatman, Jr.

Duane Morris Takeaways:  In G.T., et al. v. Samsung Electronics America, Inc., et al., No. 21-CV-4976, 2024 WL 3520026 (N.D. Ill. July 24, 2024), Judge Lindsay C. Jenkins of the U.S. District Court for the Northern District of Illinois dismissed claims brought under the Illinois Biometric Information Privacy Act (“BIPA”).  In doing so, Judge Jenkins acknowledged limitations on the types of conduct (and types of data) that can subject a company to liability under the statute.  The decision is welcome news for businesses that design, sell, or license technology yet do not control or store any “biometric” data that may be generated when customers use the technology.  The case also reflects the common sense notion that a data point does not qualify as a “biometric identifier” under the BIPA if it cannot be used to identify a specific person.  G.T. v. Samsung is required reading for corporate counsel facing privacy class action litigation.

Background

Plaintiffs — a group of Illinois residents who used Samsung smartphones and tablets — alleged that their respective devices came pre-installed with a “Gallery application” (the “App”) that can be used to organize users’ photos.  According to Plaintiffs, whenever an image is created on a Samsung device, the App automatically: (1) scans the image to search for faces using Samsung’s “proprietary facial recognition technology”; and (2) if it detects a face, analyzes the face’s “unique facial geometry” to create a “face template” (i.e., “a unique digital representation of the face”).  Id. at *2.  The App then organizes photos based on images with similar face templates, resulting in “pictures with a certain individual’s face [being] ‘stacked’ together on the App.”  Id.
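As a rough illustration of the pipeline the Complaint describes (detect faces, compute a template, then “stack” photos whose templates are similar), consider the sketch below.  Samsung’s actual implementation is proprietary and not public; every type and function name here is ours, and the detection and template steps are left as stand-in declarations.

```typescript
// Hypothetical sketch of the pipeline the Complaint describes; not Samsung's code.
type FaceTemplate = number[]; // "a unique digital representation of the face"

// Stand-ins for the proprietary detection and template-generation steps.
declare function detectFaces(image: Uint8Array): Uint8Array[];
declare function computeTemplate(faceCrop: Uint8Array): FaceTemplate;
declare function similar(a: FaceTemplate, b: FaceTemplate): boolean;

interface PhotoStack {
  template: FaceTemplate; // representative template for this face
  photoIds: string[];     // photos "stacked" together on the App
}

function stackPhotos(photos: Map<string, Uint8Array>): PhotoStack[] {
  const stacks: PhotoStack[] = [];
  for (const [photoId, image] of photos) {
    for (const face of detectFaces(image)) {
      const template = computeTemplate(face);
      const existing = stacks.find((s) => similar(s.template, template));
      if (existing) {
        existing.photoIds.push(photoId); // similar face: stack the photo
      } else {
        stacks.push({ template, photoIds: [photoId] }); // new face: new stack
      }
    }
  }
  return stacks;
}
```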

Based on their use of the devices, Plaintiffs alleged that Samsung violated §§ 15(a) and 15(b) of the BIPA by: (1) failing to develop a written policy made available to the public establishing a retention policy and guidelines for destroying biometric data, and (2) collecting Plaintiffs’ biometric data without providing them with the requisite notice and obtaining their written consent.

Samsung moved to dismiss on two grounds, arguing that: (1) Plaintiffs did not allege that Samsung “possessed” or “collected” their biometric data because they did not claim the data ever left their devices; and (2) Plaintiffs failed to allege that data generated by the App qualifies as “biometric identifiers” or “biometric information” under the BIPA, because Samsung cannot use the data to identify Plaintiffs or others appearing in uploaded photos.

The Court’s Decision

The Court granted Samsung’s motion to dismiss on both grounds.

“Possession” And “Collection” Of Biometric Data

Regarding Samsung’s first argument, the Court began by explaining what it means for an entity to be “in possession of” biometric data under § 15(a) and to “collect” biometric data under § 15(b).  The Court observed that “possession” occurs when an entity exercises control over data or holds it at its disposal.  Regarding “collection,” the Court noted that the term “collect,” and the other verbs used in § 15(b) (“capture, purchase, receive through trade, or otherwise obtain”), all refer to an entity taking an “active step” to gain control of biometric data.

The Court proceeded to consider Plaintiffs’ contention that Samsung was “in possession of” their biometrics because Samsung controls the proprietary software used to operate the App.  The Court sided with Samsung, however, concluding that Plaintiffs failed to allege “possession” (and thus failed to state a § 15(a) claim) because they did not allege that Samsung can access the data (as opposed to the technology Samsung employs).  Id. at *9 (“Samsung controls the App and its technology, but it does not follow that this control gives Samsung dominion over the Biometrics generated from the App, and plaintiffs have not alleged Samsung receives (or can receive) such data.”).

As for § 15(b), the Court rejected Plaintiffs’ argument that Samsung took an “active step” to “collect” their biometrics by designing the App to “automatically harvest[] biometric data from every photo stored on the Device.”  Id. at *11.  The Court determined that Plaintiffs’ argument failed for the same reason their § 15(a) “possession” argument failed.  Id. at *11-12 (“Plaintiffs’ argument again conflates technology with Biometrics. . . . Plaintiffs do not argue that Samsung possesses the Data or took any active steps to collect it.  Rather, the active step according to Plaintiffs is the creation of the technology.”).

“Biometric Identifiers” And “Biometric Information”

The Court next turned to Samsung’s second argument for dismissal – namely, that Plaintiffs failed to allege that data generated by the App is “biometric” under the BIPA because Samsung could not use it to identify Plaintiffs (or others appearing in uploaded photos).

In opposing this argument, Plaintiffs asserted that: (1) the “App scans facial geometry, which is an explicitly enumerated biometric identifier”; and (2) the “mathematical representations of face templates” stored through the App constitute “biometric information” (i.e., information “based on” scans of Plaintiffs’ “facial geometry”).  Id. at *13.

The Court ruled that “Samsung has the better argument,” holding that Plaintiffs’ claims failed because Plaintiffs did not allege that Samsung can use data generated through the App to identify specific people.  Id. at *15.  The Court acknowledged that cases are split “on whether a plaintiff must allege a biometric identifier can identify a particular individual, or if it is sufficient to allege the defendant merely scanned, for example, the plaintiff’s face or retina.”  Id. at *13.  After employing relevant principles of statutory interpretation, the Court sided with the cases in the former category and opined that “the plain meaning of ‘identifier,’ combined with the BIPA’s purpose, demonstrates that only those scans that can identify an individual qualify.”  Id. at *15.

Turning to the facts alleged in the Complaint, the Court concluded that Plaintiffs failed to state claims under the BIPA because the data generated by the App does not amount to “biometric identifiers” or “biometric information” merely because it can be used to detect and group the unique faces of unnamed people.  In other words, biometric information must be capable of recognizing an individual’s identity – “not simply an individual’s feature.”  Id. at *17; see also id. at *18 (noting that Plaintiffs claimed only that the App groups unidentified faces together, and that it is the device user who can add names or other identifying information to the faces).

Implications Of The Decision

G.T. v. Samsung is one of several recent decisions grappling with key questions surrounding the BIPA, including questions as to: (1) when an entity engages in conduct that rises to the level of “possession” or “collection” of biometrics; and (2) what data points qualify (and do not qualify) as “biometric identifiers” and “biometric information” such that they are subject to regulation under the statute.

Regarding the first question, the Samsung case reflects the developing majority position among courts – i.e., a company is not “in possession of,” and has not “collected,” data that it does not actually receive or access, even if it created and controlled the technology that generated the allegedly biometric data.

As for the second question, the Court’s decision in Samsung complements the Ninth Circuit’s recent decision in Zellmer v. Meta Platforms, Inc., where it held that a “biometric identifier” must be capable of identifying a specific person.  See Zellmer v. Meta Platforms, Inc., 104 F.4th 1117, 1124 (9th Cir. 2024) (“Reading the statute as a whole, it makes sense to impose a similar requirement on ‘biometric identifier,’ particularly because the ability to identify did not need to be spelled out in that term — it was readily apparent from the use of ‘identifier.’”).  Courts have not uniformly endorsed this reading, however, and parties will likely continue litigating the issue unless and until the Illinois Supreme Court provides the final word on what counts as a “biometric identifier” and “biometric information.”

California Federal Court Denies Motion To Dismiss Artificial Intelligence Employment Discrimination Lawsuit

By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller

Duane Morris Takeaways:  In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability.  This litigation has been closely watched for its novel case theory based on artificial intelligence use in making personnel decisions.  For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount.  This ruling illustrates that employment screening vendors who utilize AI software may be liable for discrimination claims as agents of employers.  

This development follows Workday’s first successful Motion to Dismiss, which we blogged about here, and the EEOC’s amicus brief filing, which we blogged about here.

Case Background

Plaintiff is an African American male over the age of 40, with a bachelor’s degree in finance from Morehouse College, an all-male Historically Black College and University, and an honors graduate degree. Id. at 2. Plaintiff also alleges he suffered from anxiety and depression.  Since 2017, Plaintiff applied to over 100 jobs with companies that use Workday’s screening tools.  In many applications, Plaintiff alleges he was required to take a “Workday-branded assessment and/or personality test.”  Plaintiff asserts these assessments “likely . . . reveal mental health disorders or cognitive impairments,” so others who suffer from anxiety and depression are “likely to perform worse  … and [are] screened out.”  Id. at 2-3.  Plaintiff was allegedly denied employment through Workday’s platform across all submitted applications.

Plaintiff alleges Workday’s algorithmic decision-making tools discriminate against job applicants who are African-American, over the age of 40, and/or disabled.  Id. at 3.  In support of these allegations, Plaintiff claims that in one instance, he applied for a position at 12:55 a.m. and his application was rejected less than an hour later.  Plaintiff brought claims under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (“Section 1981”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the ADA Amendments Act of 2008 (“ADA”), for intentional discrimination on the basis of race and age, and disparate impact discrimination on the basis of race, age, and disability.  Plaintiff also brought a claim against Workday for aiding and abetting race, disability, and age discrimination under California’s Fair Employment and Housing Act (“FEHA”).  Workday moved to dismiss, and Plaintiff’s opposition was supported by an amicus brief filed by the EEOC.

The Court’s Decision

The Court granted in part and denied in part Workday’s motion to dismiss.  At the outset of its opinion, the Court noted that Plaintiff alleged Workday was liable for employment discrimination, under Title VII, the ADEA, and the ADA, on three theories: as (1) an employment agency; (2) an agent of employers; and (3) an indirect employer.  Id. at 5.

The Court opined that the relevant statutes prohibit discrimination “not just by employers but also by agents of those employers,” so an employer cannot “escape liability for discrimination by delegating [] traditional functions, like hiring, to a third party.”  Id.  Therefore, an employer’s agent can be independently liable when the employer has delegated to the agent “functions [that] are traditionally exercised by the employer.”  Id.

With regard to the “employment agency” theory, the Court reasoned that employment agencies “procure employees for an employer” – meaning – “they find candidates for an employer’s position; they do not actually employ those employees.”  Id. at 7.  The Court further reasoned that employment agencies are liable when they “fail or refuse to refer” individuals for consideration by employers on prohibited bases.  Id.  The Court held Plaintiff did not sufficiently allege Workday finds employees for employers such that Workday is an employment agency.  Accordingly, the Court granted Workday’s motion to dismiss with respect to the anti-discrimination statutes based on an employment agency theory, without leave to amend.

In addition, the Court held that Workday may be liable on an agency theory, as Plaintiff plausibly alleged Workday’s customers delegated to Workday their traditional function of rejecting candidates or advancing them to the interview stage.  Id.  The Court determined that if it reasoned otherwise, and accepted Workday’s arguments, then companies would “escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence).”  Id. at 8.  The Court determined Plaintiff’s allegations that Workday’s decision-making tools “make hiring decisions,” as its software can “automatically disposition[] or move[] candidates forward in the recruiting process,” were plausible.  Id. at 9.

The Court opined that given Workday’s allegedly “crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”  Id.  With regard to artificial intelligence, the Court noted “Workday’s role in the hiring process was no less significant because it allegedly happens through artificial intelligence,” and the Court declined to “draw[] an artificial distinction between software decision-makers and human decision-makers,” as any distinction would “gut anti-discrimination laws in the modern era.”  Id. at 10.

Accordingly, the Court denied Workday’s motion to dismiss Plaintiff’s federal discrimination claims.

Disparate Impact Claims

The Court next denied Workday’s motion to dismiss Plaintiff’s disparate impact discrimination claims, as Plaintiff adequately alleged all elements of a prima facie case for disparate impact.

First, Plaintiff’s amended complaint asserted that Workday’s use of algorithmic decision-making tools to screen applicants, including training data from personality tests, had a disparate impact on job-seekers in certain protected categories.  Second, the Court recognized that Plaintiff’s assertions were not those of a typical case.  “Unlike a typical employment discrimination case where the dispute centers on the plaintiff’s application to a single job, [Plaintiff] has applied to and been rejected from over 100 jobs for which he was allegedly qualified.”  Id. at 14.  The Court reasoned the “common denominator” for these positions was Workday and the platform Workday provided to companies for application intake and screening.  Id.

The Court held “[t]he zero percent success rate at passing Workday’s initial screening,” combined with Plaintiff’s allegations of bias in Workday’s training data and tools, plausibly supported an inference that Workday’s algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate’s race, age, or disability.  Id. at 15.  The Court therefore denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA.  Id. at 16.

Intentional Discrimination Claims

The Court granted Workday’s motion to dismiss Plaintiff’s claims that Workday intentionally discriminated against him based on race and age.  Id.  The Court found that Plaintiff sufficiently alleged he was qualified through his various degrees, areas of expertise, and work experience.  However, the Court found that Plaintiff’s allegation that Workday intended its screening tools to be discriminatory because “Workday [was] aware of the discriminatory effects of its applicant screening tools” was not enough to satisfy his pleading burden.  Id. at 18.  Accordingly, the Court granted Workday’s motion to dismiss Plaintiff’s intentional discrimination claims under Title VII, the ADEA, and § 1981, without leave to amend, while leaving the door open for Plaintiff to seek amendment if a discriminatory intention is revealed during future discovery.  Id.  Finally, the Court granted Workday’s motion to dismiss Plaintiff’s claim under California’s Fair Employment and Housing Act with leave to amend.

Implications For Employers

The Court’s resolution of liability for software vendors that provide AI screening tools to employers centered on whether those tools were involved in “traditional employment decisions.”  Here, the Court held that Plaintiff sufficiently alleged that Workday was an agent for employers since it made employment decisions in the screening process through the use of artificial intelligence.

This decision likely will be used as a roadmap for the plaintiffs’ bar to bring discrimination claims against third-party vendors involved in the employment decision process, especially those using algorithmic software to make those decisions. Companies should also take heed, especially given the EEOC’s prior guidance that suggests employers should be auditing their vendors for the impact of their use of artificial intelligence.

California Federal Court Refuses To Dismiss Wiretapping Class Action Involving Company’s Use Of Third-Party AI Software

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan Norimoto

Duane Morris Takeaways:  On July 5, 2024, in Jones, et al. v. Peloton Interactive, Inc., No. 23-CV-1082, 2024 WL 3315989 (S.D. Cal. July 5, 2024), Judge M. James Lorenz of the U.S. District Court for the Southern District of California denied a motion to dismiss a class action complaint alleging that a company’s use of a third-party AI-powered chat feature embedded in the company’s website aided and abetted an interception in violation of the California Invasion of Privacy Act (CIPA).  Judge Lorenz was unpersuaded by the company’s arguments that the third party functioned as an extension of the company rather than as a third-party eavesdropper.  Instead, the Court found that the complaint alleged sufficient facts to plausibly allege that the third party used the chats to improve its own AI algorithm and thus was more akin to a third-party eavesdropper, for whose wiretapping the company could be held liable under the CIPA on an aiding-and-abetting theory.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that third-party AI-powered software embedded in defendants’ websites or other processes and technologies captured plaintiffs’ information and sent it to the third party.  A common claim raised in these cases arises under federal or state wiretap acts and seeks hundreds of millions or billions of dollars in statutory damages.  No wiretap claim can succeed, however, where the plaintiff has consented to the embedded technology’s receipt of their communications.  See, e.g., Smith v. Facebook, Inc., 262 F. Supp. 3d 943, 955 (N.D. Cal. 2017) (dismissing CIPA claim involving embedded Meta Pixel technology because plaintiffs consented to alleged interceptions by Meta via their Facebook user agreements).

In Jones, Plaintiffs brought suit against an exercise equipment and media company.  According to Plaintiffs, the defendant company used third-party software embedded in its website’s chat feature.  Id. at *1.  Plaintiffs further alleged that the software routed the communications directly to the third party without Plaintiffs’ consent, thereby allowing the third party to use the content of the communications “to improve the technological function and capabilities of its proprietary, patented artificial intelligence software.”  Id. at **1, 4.

Based on these allegations, Plaintiffs alleged a claim for aiding and abetting an unlawful interception and use of the intercepted information under California’s wiretapping statute, CIPA § 631.  Id. at *2.  Although Plaintiffs did not allege any actual damages, see ECF No. 1, the statutory damages they sought totaled at least $1 billion.  See id. ¶ 33 (alleging hundreds of thousands of class members); Cal. Penal Code § 637.2 (setting forth statutory damages of $5,000 per violation).  The company moved to dismiss under Rule 12(b)(6), arguing that the “party exception” to CIPA applied because the third-party software “functions as an extension of [the company] rather than as a third-party eavesdropper.”  2024 WL 3315989, at *2.
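The billion-dollar exposure follows from simple multiplication.  In the sketch below, 200,000 is our illustrative stand-in for the alleged “hundreds of thousands” of class members; only the $5,000-per-violation figure comes from the statute.

```typescript
// Illustrative arithmetic; 200,000 stands in for "hundreds of thousands."
const allegedClassMembers = 200_000;
const statutoryDamagesPerViolation = 5_000; // Cal. Penal Code § 637.2

console.log(allegedClassMembers * statutoryDamagesPerViolation); // 1000000000, i.e., $1 billion
```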

The Court’s Opinion

The Court denied the company’s motion and allowed Plaintiffs’ CIPA claim to proceed to discovery.

Under CIPA’s party exception, a party to a communication cannot be held liable under the statute for intercepting its own conversation.  Id. at *2.  To determine, for purposes of that exception, whether the embedded chat software provider was more akin to a party or to a third-party eavesdropper, the Court found that courts look to the “technical context of the case.”  Id. at *3.  As the Court explained, a software provider can be held liable as a third party under CIPA if that entity listens in on a consensual conversation where the entity “uses the collected data for its own commercial purposes.”  Id.  By contrast, the Court further explained, if the software provider merely collects, refines, and relays the information obtained on the company website back to the company “in aid of [defendant’s] business,” then it functions as a tool and not as a third party.  Id.

Guided by this framework, the Court found sufficient allegations that the software provider used the chats collected on the company’s website for its own purposes of improving its AI-driven algorithm.  Id. at *4.  Therefore, according to the Court, the complaint sufficiently alleged that the software provider was “more than a mere ‘extension’” of the company, such that CIPA’s party exception did not apply and Plaintiffs sufficiently stated a claim for the company’s aiding and abetting of the software provider’s wiretap violation.  Id.

Implications For Companies

The Court’s opinion serves as a cautionary tale for companies using third-party AI-powered processes and technologies that collect customer communications and information.  As the ruling shows, litigation risk associated with companies’ use of third-party AI-powered algorithms is not limited to complaints alleging damaging outcomes such as the discriminatory impacts alleged in Louis v. Saferent Sols., LLC, 685 F. Supp. 3d 19, 41 (D. Mass. 2023) (denying motion to dismiss claim under Fair Housing Act against landlord in conjunction with landlord’s use of an algorithm used to calculate the risk of leasing a property to a particular tenant).  In addition, companies face the risk of high-stakes claims for statutory damages under wiretap statutes associated with their use of third-party AI-powered algorithms embedded in their websites, even if the third party’s only use of the communications is to improve its algorithm and even if no actual damages are alleged.

As AI-related technologies continue their growth spurt, and litigation in this area spurts accordingly, organizations should consider in light of Jones whether to modify their website terms of use, data privacy policies, and all other notices to the organizations’ website visitors and customers to describe the organization’s use of AI in additional detail.  Doing so could deter or help defend a future AI class action lawsuit similar to the many that are being filed today, alleging omission of such additional details, raising claims brought under various states’ wiretap acts and consumer fraud acts, and seeking multimillion-dollar and billion-dollar statutory damages.

California Federal Court Rejects AI Class Action Plaintiffs’ Cherry-Picking Of AI Algorithm Test Results And Orders Production Of All Results And Account Settings

By Gerald L. Maatman, Jr., Justin R. Donoho, and Brandon Spurlock

Duane Morris Takeaways:  On June 24, 2024, Magistrate Judge Robert Illman of the U.S. District Court for the Northern District of California ordered a group of authors alleging copyright infringement by a maker of generative artificial intelligence to produce information relating to pre-suit algorithmic testing in Tremblay v. OpenAI, Inc., No. 23-CV-3223 (N.D. Cal. June 13, 2024).  The ruling is significant as it shows that plaintiffs who file class action complaints alleging improper use of AI and relying on cherry-picked results from their testing of the AI-based algorithms at issue cannot simultaneously withhold during discovery their negative testing results and the account settings used to produce any results.  The Court’s reasoning applies not only in gen AI cases, but also in other AI cases such as website advertising technology cases.

Background

This case is one of over a dozen class actions filed in the last two years alleging that makers of generative AI technologies violated copyright laws by training their algorithms on copyrighted content, or that they violated wiretapping, data privacy, and other laws by training their algorithms on personal information.

It is also one of the hundreds of class actions filed in the last two years involving AI technologies used not only for gen AI but also for facial recognition or other facial analysis, website advertising, profiling, automated decision-making, educational operations, clinical medicine, and more.

In Tremblay v. OpenAI, plaintiffs (a group of authors) allege that an AI company trained its algorithm by “copying massive amounts of text” to enable it to “emit convincingly naturalistic text outputs in response to user prompts.”  Id. at 1.  Plaintiffs allege these outputs include summaries that are so accurate that the algorithm must retain knowledge of the ingested copyrighted works in order to output similar textual content.  Id. at 2.  An exhibit to the complaint displaying the algorithm’s prompts and outputs purports to support these allegations.  Id.

The AI company sought discovery of (a) the account settings; and (b) the algorithm’s prompts and outputs that “did not” include the plaintiffs’ “preferred, cherry-picked” results.  Id. (emphasis in original).  The plaintiffs refused, citing work-product privilege, which protects from discovery documents prepared in anticipation of litigation or for trial.  The AI company argued that the authors waived that protection by revealing their preferred prompts and outputs, and asked the court to order production of the negative prompts and outputs, too, and all related account settings.  Id. at 2-3.

The Court’s Decision

The Court agreed with the AI company and ordered production of the account settings and all of plaintiffs’ pre-suit algorithmic testing results, including any negative ones, for four reasons.

First, the Court held that the algorithmic testing results were not work product but “more in the nature of bare facts.”  Id. at 5-6.

Second, the Court determined that “even assuming arguendo” that the work-product privilege applied, the privilege was waived “by placing a large subset of these facts in the [complaint].”  Id. at 6.

Third, the Court reasoned that the negative testing results were relevant to the AI company’s defenses, notwithstanding the plaintiffs’ argument that the negative testing results were irrelevant to their claims.  Id. at 6.

Finally, the Court rejected the plaintiffs’ argument that the AI company can simply interrogate the algorithm itself.  As the Court explained, “without knowing the account settings used by Plaintiffs to generate their positive and negative results, and without knowing the exact formulation of the prompts used to generate Plaintiffs’ negative results, Defendants would be unable to replicate the same results.”  Id.

Implications For Companies

This case is a win for defendants in class actions based on alleged outputs of AI-based algorithms.  In such cases, the Tremblay decision can be cited as useful precedent for seeking discovery from recalcitrant plaintiffs of all of plaintiffs’ pre-suit prompts and outputs, and all related account settings.  The Court’s fourfold reasoning in Tremblay applies not only in gen AI cases but also in other AI cases.  For example, in website advertising technology (adtech) cases, plaintiffs should not be able to withhold their adtech settings (the account settings), their browsing histories and behaviors (the prompts), and all documents relating to targeted advertising they allegedly received as a result, any related purchases, and alleged damages (the outputs).  As AI-related technologies continue their growth spurt, and litigation in this area spurts accordingly, the implications of Tremblay may reach far and wide.

Illinois Federal Court Rejects Class Action Because An AI-Powered Porn Filter Does Not Violate The BIPA

By Gerald L. Maatman, Jr., Justin R. Donoho, and Tyler Z. Zmick

Duane Morris Takeaways:  In a consequential ruling on June 13, 2024, Judge Sunil Harjani of the U.S. District Court for the Northern District of Illinois dismissed a class action brought under the Illinois Biometric Information Privacy Act (BIPA) in Martell v. X Corp., Case No. 23-CV-5449, 2024 WL 3011353 (N.D. Ill. June 13, 2024).  The ruling is significant as it shows that plaintiffs alleging that cutting-edge technologies violate the BIPA face significant hurdles to support the plausibility of their claims when the technology neither performs facial recognition nor records distinct facial measurements as part of any facial recognition process.

Background

This case is one of over 400 class actions filed in 2023 alleging that companies improperly obtained individuals’ biometric identifiers and biometric information in violation of the BIPA.

In Martell v. X Corp., Plaintiff alleged that he uploaded a photograph containing his face to the social media platform “X” (formerly known as Twitter), which X then analyzed for nudity and other inappropriate content using a product called “PhotoDNA.”  According to Plaintiff, PhotoDNA created a unique digital signature of his face-containing photograph known as a “hash” to compare against the hashes of other photographs, thus necessarily obtaining a “scan of … face geometry” in violation of the BIPA, 740 ILCS 14/10.
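To make the technology allegation concrete, the sketch below shows a generic hash-based matching step of the kind the Complaint describes: compute a whole-image signature, then check it against the signatures of known images.  PhotoDNA’s actual algorithm is proprietary, so nothing here reflects its real design, and the function names are ours.

```typescript
// Generic hash-based content matching; not PhotoDNA's actual (proprietary) design.
declare function imageSignature(photo: Uint8Array): bigint; // whole-image "hash"

function matchesKnownContent(photo: Uint8Array, knownSignatures: Set<bigint>): boolean {
  // The signature derives from the photograph as a whole, not from measurements
  // of facial features, which is the crux of the "scan of face geometry"
  // dispute in Martell.
  return knownSignatures.has(imageSignature(photo));
}
```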

X Corp. moved to dismiss Plaintiff’s BIPA claim, arguing, among other things, that Plaintiff failed to allege that PhotoDNA obtained a scan of face geometry because (1) PhotoDNA did not perform facial recognition; and (2) the hash obtained by PhotoDNA could not be used to re-identify him.

The Court’s Opinion And Its Dual Significance

The Court granted X Corp.’s motion to dismiss based on both of these arguments.  First, the Court found no plausible allegations of a scan of face geometry because “PhotoDNA is not facial recognition software.”  Martell, 2024 WL 3011353, at *2 (N.D. Ill. June 13, 2024).  As the Court explained, “Plaintiff does not allege that the hash process takes a scan of face geometry, rather he summarily concludes that it must. The Court cannot accept such conclusions as facts adequate to state a plausible claim.”  Id. at *3.

In other cases in which plaintiffs have brought BIPA claims involving face-related technologies performing functions other than facial recognition, companies have received mixed rulings when challenging the plausibility of allegations that their technologies obtained facial data “biologically unique to the individual.”  740 ILCS 14/5(c).  BIPA defendants have been similarly successful at the pleading stage as X Corp., for example, in securing dismissal of BIPA lawsuits involving virtual try-on technologies that allow customers to use their computers to visualize glasses, makeup, or other accessories on their face.  See Clarke v. Aveda Corp., 2023 WL 9119927, at *2 (N.D. Ill. Dec. 1, 2023); Castelaz v. Estee Lauder Cos., Inc., 2024 WL 136872, at *7 (N.D. Ill. Jan. 10, 2024).  Defendants have been less successful at the pleading stage and continue to litigate their cases, however, in cases involving software verifying compliance with U.S. passport photo requirements, Daichendt v. CVS Pharmacy, Inc., 2023 WL 3559669, at *2 (N.D. Ill. May 4, 2023), and software detecting fever from the forehead and whether the patient is wearing a facemask, Trio v. Turing Video, Inc., 2022 WL 4466050, at *13 (N.D. Ill. Sept. 26, 2022).  Martell bolsters these mixed rulings in non-facial recognition cases in favor of defendants, with its finding that mere allegations of verification that a face-containing picture is not pornographic are insufficient to establish that the defendant obtained any biometric identifier or biometric information.

Second, the Court found no plausible allegations of a scan of face geometry because “Plaintiff’s Complaint does not include factual allegations about the hashes including that it conducts a face geometry scan of individuals in the photo.”  Martell, 2024 WL 3011353, at *3.  Instead, the Court found, obtaining a scan of face geometry means “zero[ing] in on [a face’s] unique contours to create a ‘template’ that maps and records [the individual’s] distinct facial measurements.”  Id.

This holding is significant and has potential implications for BIPA suits based on AI-based, modern facial recognition systems in which the AI transforms photographs into numerical expressions that can be compared to determine their similarity, similar to the way X Corp.’s PhotoDNA transformed a photograph containing a face into a unique numerical hash.  Older, non-AI facial recognition systems in place at the time of the BIPA’s enactment in 2008, by contrast, attempt to identify individuals by using measurements of face geometry that identify distinguishing features of each subject’s face.  These older systems construct a facial graph from key landmarks such as the corners of the eyes, tip of the nose, corners of the mouth, and chin.  Does AI-based facial recognition — which does not “map[] and record[] … distinct facial measurements” (id. at *3) like these older systems — perform a scan of face geometry under the BIPA?  One court addressing this question raised in opposing summary judgment briefs and opined on by opposing experts held: “This is a quintessential dispute of fact for the jury to decide.”  In Re Facebook Biometric Info. Priv. Litig., 2018 WL 2197546, at *3 (N.D. Cal. May 14, 2018).  In short, whether AI-based facial recognition systems violate the BIPA remains “the subject of debate.”  “The Sedona Conference U.S. Biometric Systems Privacy Primer,” The Sedona Conference Journal, vol. 25, at 200 (May 2024).  The Court’s holding in Martell adds to this mosaic and suggests that plaintiffs challenging AI-based facial recognition systems under the BIPA will face significant hurdles to prove that the technology obtains a scan of face geometry.
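The technical contrast driving this open question can be sketched briefly: a modern system compares learned embedding vectors for similarity, while an older system measured an explicit facial graph.  Both snippets below are our illustrations under those general descriptions, not any vendor’s code.

```typescript
// Modern approach (illustrative): compare AI-generated embedding vectors.
// A similarity near 1 suggests the same face; no explicit measurements are stored.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Older approach (illustrative): explicit distances between facial landmarks,
// e.g., corners of the eyes, tip of the nose, corners of the mouth, and chin.
interface FacialGraph {
  interocularDistance: number;
  noseTipToChin: number;
  mouthWidth: number;
}
```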

Implications For Companies

The Court’s dismissal of conclusory allegations is a win for defendants whose cutting-edge technologies neither perform facial recognition nor record distinct facial measurements as part of any facial recognition process.  While litigation over the BIPA undoubtedly will continue, the Martell decision supplies useful precedent for companies facing BIPA lawsuits containing insufficient allegations that they have obtained a scan of facial geometry unique to an individual.

District Court Dismisses Data Privacy Class Action Against Health Care System For Failure To Sufficiently Allege Disclosure of PHI

By Gerald L. Maatman, Jr., Jennifer A. Riley, Justin Donoho, and Ryan T. Garippo

Duane Morris Takeaways:  On June 10, 2024, in Smart, et al. v. Main Line Health, Inc., No. 22-CV-5239, 2024 WL 2943760 (E.D. Pa. June 10, 2024), Judge Kai Scott of the U.S. District Court for the Eastern District of Pennsylvania dismissed in its entirety a class action complaint alleging that a nonprofit health system’s use of website advertising technology disclosed the plaintiff’s protected health information (“PHI”) in violation of the federal wiretap act and in commission of the common-law torts of negligence and invasion of privacy.  The ruling is significant because it shows that such claims cannot surmount Rule 12(b)(6)’s plausibility standard without specifying the PHI allegedly disclosed.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on many websites in operation today; millions of companies and governmental organizations utilize it.  (See, e.g., Customer Data Platform Institute, “Trackers and Pixels Feeding Data Broker Stores” (reporting “47% of websites using Meta Pixel, including 55% of S&P 500, 58% of retail, 42% of financial, and 33% of healthcare”); BuiltWith, “Facebook Pixel Usage Statistics” (offering access to data on over 14 million websites using the Meta Pixel and stating “[w]e know of 5,861,028 live websites using Facebook Pixel and an additional 8,181,093 sites that used Facebook Pixel historically and 2,543,263 websites in the United States”).)

In these lawsuits, plaintiffs generally allege that the defendant organization’s use of adtech violated federal and state wiretap statutes, consumer fraud statutes, and other laws, and they often seek hundreds of millions of dollars in statutory damages.  Plaintiffs have focused the bulk of their efforts to date on healthcare providers, but they have filed suits that span nearly every industry including retailers, consumer products, and universities.

In Smart, 2024 WL 2943760, at *1, Plaintiff brought suit against Main Line Health, Inc. (“Main Line”), “a non-profit health system.”  According to Plaintiff, Main Line installed the Meta Pixel on its public-facing website – not on its secure patient portal, id. at *1 n.2 – and thereby transmitted web-browsing information entered by users on the public-facing website such as:

“characteristics of individual patients’ communications with the [Main Line] website (i.e., their IP addresses, Facebook ID, cookie identifiers, device identifiers and account numbers) and the content of these communications (i.e., the buttons, links, pages, and tabs they click and view).”

Id. (quotations omitted).
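
For readers less familiar with how such adtech events are structured, the following hypothetical Python snippet illustrates the two categories of data quoted above.  Every field name and value is invented for illustration; this is not an actual Meta Pixel payload.

```python
# Hypothetical illustration of the two categories of data quoted above.
hypothetical_pixel_event = {
    # "characteristics" of the communication (who/where)
    "ip_address": "203.0.113.7",
    "facebook_id": "fb.1.1234567890",
    "cookie_id": "_fbp=...",
    # "contents" of the communication (what the visitor clicked and viewed)
    "event": "PageView",
    "page_url": "https://example.org/services/cardiology",
    "button_clicked": "Schedule an Appointment",
}

for field, value in hypothetical_pixel_event.items():
    print(f"{field}: {value}")
```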

Based on these allegations, Plaintiff alleged claims for violation of the Electronic Communications Privacy Act (ECPA), negligence, and invasion of privacy.  Main Line moved to dismiss under Rule 12(b)(6) for failure to state sufficient facts that, if accepted as true, would state a claim for relief that is plausible on its face.

The Court’s Opinion

The Court agreed with Main Line and dismissed all three of Plaintiff’s claims.

To state a claim for violation of the ECPA, also known as the federal wiretap act, a plaintiff must show an intentional interception of the contents of an electronic communication using a device.  Main Line, 2024 WL 2943760, at *3.  The ECPA is a one-party consent statute, meaning that there is no liability under the statute for any party to the communication “unless such communication is intercepted for the purposes of committing a criminal or tortious act in violation of the Constitution or laws of the United States or any State.”  Id. (quoting 18 U.S.C. § 2511(2)(d)).

Plaintiff argued that he plausibly alleged Main Line’s criminal or tortious purpose because, under the Health Insurance Portability and Accountability Act (“HIPAA”), it is a federal crime for a health care provider to knowingly disclose PHI to another person.  The district court rejected this argument, finding that Plaintiff failed to allege sufficient facts to support an inference that Main Line disclosed his PHI.  As the district court explained: “Plaintiff has not alleged which specific web pages he clicked on for his medical condition or his history of treatment with Main Line Health.”  Id. at *3 (collecting cases).

In short, the district court concluded that Plaintiff’s failure to sufficiently allege PHI was reason alone for the Court to dismiss Plaintiff’s ECPA claim.  Thus, the district court did not need to address other reasons that may have required dismissal of Plaintiff’s ECPA claims, such as (1) lack of criminal or tortious intent even if PHI had been sufficiently alleged, see, e.g., Katz-Lacabe v. Oracle Am., Inc., 668 F. Supp. 3d 928, 945 (N.D. Cal. 2023) (dismissing wiretap claim because defendant’s “purpose has plainly not been to perpetuate torts on millions of Internet users, but to make money”); Nienaber v. Overlake Hosp. Med. Ctr., 2024 WL 2133709, at *15 (W.D. Wash. May 13, 2024) (dismissing wiretap claim because “Plaintiff fails to plead a tortious or criminal use of the acquired communications, separate from the recording, interception, or transmission”); and (2) lack of any interception, see, e.g., Allen v. Novant Health, Inc., 2023 WL 5486240, at *4 (M.D.N.C. Aug. 24, 2023) (dismissing wiretap claim because an intended recipient cannot “intercept”); Glob. Pol’y Partners, LLC v. Yessin, 686 F. Supp. 2d 631, 638 (E.D. Va. 2009) (dismissing wiretap claim because the communication was sent as a different communication, not “intercepted”).

On Plaintiff’s remaining claims, the district court held that lack of sufficiently pled PHI defeated the causation element of Plaintiff’s negligence claim and defeated the element of Plaintiff’s invasion of privacy claim that any intrusion must have been “highly offensive to a reasonable person.”  Main Line, 2024 WL 2943760, at *4.

Implications For Companies

The holding of Main Line is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have described the statutory damages imposed by the ECPA as “draconian.”  See, e.g., DIRECTV, Inc. v. Beecher, 296 F. Supp. 2d 937, 943 (S.D. Ind. 2003).  Main Line shows that, for adtech plaintiffs to plausibly plead claims for ECPA violations, negligence, or invasion of privacy, they at least need to identify what allegedly private information was disclosed via the adtech, in addition to surmounting additional hurdles under the ECPA such as plausibly pleading criminal or tortious intent and an interception.

Four Best Practices For Deterring Cybersecurity And Data Privacy Class Actions And Mass Arbitrations

By Justin Donoho

Duane Morris Takeaway: Class action lawsuits and mass arbitrations alleging cybersecurity incidents and data privacy violations are rising exponentially.  Corporate counsel seeking to deter such litigation and arbitration demands from being filed against their companies should keep in mind the following four best practices: (1) add or update arbitration clauses to mitigate the risks of mass arbitration; (2) use cybersecurity best practices, including continuously improving and prioritizing compliance activities; (3) audit and adjust uses of website advertising technologies; and (4) update website terms of use, data privacy policies, and vendor agreements.

Best Practices

  1. Add or update arbitration agreements to mitigate the risks of mass arbitration

Many organizations have long been familiar with the strategy of deterring class and collective actions by presenting arbitration clauses containing class and collective action waivers prominently for web users, consumers, and employees to accept via click wrap, browse wrap, login wrap, shrink wrap, and signatures.  Such agreements would require all allegedly injured parties to file individual arbitrations in lieu of any class or collective action.  Moreover, the strategy goes, filing hundreds, thousands, or more individual arbitrations would be cost-prohibitive for so many putative plaintiffs and thus deter them from taking any action against the organization in most cases.

Over the last decade, this strategy of deterrence was effective.[1]  Times have changed.  Now enterprising plaintiffs’ attorneys with burgeoning war chests, litigation funders, and high-dollar novel claims for statutory damages are increasingly using mass arbitration to pressure organizations into multimillion-dollar settlements, just to avoid the arbitration costs.  In mass arbitrations filed with the American Arbitration Association (AAA) or Judicial Arbitration and Mediation Services (JAMS), for example, fees can total millions of dollars to defend only 500 individual arbitrations.[2]  One study found upfront fees ranging into the tens of millions of dollars for some large mass arbitrations.[3]  Companies with old arbitration clauses have been caught off guard by mass arbitrations and have sought relief from courts to avoid defending them; several recent decisions rejected that relief, ordering the defendant to arbitrate and to pay the hefty required mass arbitration fees.[4]
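
A quick back-of-the-envelope calculation shows how these fees compound.  The per-case figures in this Python sketch are hypothetical round numbers chosen for illustration only; consult the actual AAA and JAMS fee schedules cited in notes 2 and 3 for current amounts.

```python
# Hypothetical per-case fee figures -- NOT the actual AAA/JAMS schedules.
num_claimants = 500
initiation_fee_per_case = 3_000   # assumed respondent-side filing fee
case_management_fee = 1_400       # assumed per-case administrative fee
arbitrator_compensation = 2_500   # assumed per-case arbitrator deposit

per_case_total = (initiation_fee_per_case + case_management_fee
                  + arbitrator_compensation)
print(f"Per-case fees: ${per_case_total:,}")                    # $6,900
print(f"{num_claimants} cases: ${per_case_total * num_claimants:,}")  # $3,450,000
```

Even at these modest assumed rates, defending 500 individual arbitrations approaches $3.5 million in fees before the merits of a single claim are addressed.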

If your organization has an arbitration clause, then one of the first challenges for counsel defending many newly served class action lawsuits these days is determining whether to move to compel arbitration.  Although compelling arbitration could defeat the class action, is it worth the risk of mass arbitration and the projected costs involved?  Sometimes not.

Increasingly, organizations are mitigating this risk by including mechanisms in their arbitration clauses such as pre-dispute resolution clauses, mass arbitration waivers, bellwether procedures, arbitration case filing requirements, and more.  This area of the law is developing quickly.  One case to watch is Wallrich v. Samsung Electronics America, Inc., No. 23-2842 (7th Cir.) (argued February 15, 2024), among the first appellate cases to address the latest trend of mass arbitrations; at issue is whether the district court erred in ordering the BIPA defendant to pay over $4 million in mass arbitration fees.

  2. Use cybersecurity best practices, including continuously improving and prioritizing compliance activities

IT organizations have long been familiar with the maxim that they should continuously improve their cybersecurity measures and other IT services.  Continuous improvement is part of many IT industry guidelines, such as ISO 27000, COBIT, ITIL, the NIST Cybersecurity Framework (CSF) and Special Publication 800, and the U.S. Department of Energy’s Cybersecurity Capability Maturity Model (C2M2).  Continuous improvement is becoming increasingly necessary in cybersecurity, as organizations’ IT systems and cybercriminals’ tools multiply at an increased rate.  The volume of data breach class actions doubled three times from 2019 to 2023.

Continuous improvement of cybersecurity measures needs to accelerate accordingly.  As always, IT organizations need to prioritize.  Priorities typically include:

  • improving IT governance;
  • complying with industry guidelines such as ISO, COBIT, ITIL, NIST, and C2M2;
  • deploying multifactor authentication, network segmentation, and other multilayered security controls;
  • staying current with identifying, prioritizing, and patching security holes as new ones continuously arise;
  • designing and continuously improving a cybersecurity incident response plan;
  • routinely practicing handling ransomware incidents with tabletop exercises (may be covered by cyber-insurers); and
  • implementing and continuously improving security information and event management (SIEM) systems and processes.

Measures like these to continuously improve and prioritize: (a) will help prevent a cybersecurity incident from occurring in the first place; and (b) if one occurs, will help the victim organization of cybertheft defend against plaintiffs’ arguments that the organization failed to use reasonable cybersecurity measures.
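
As a concrete, if greatly simplified, illustration of the last item on the list above, the following Python sketch shows the kind of rule a SIEM system automates: flagging an account with repeated failed logins in a short window.  The log format and the alert threshold are assumptions made for the sketch; production SIEM systems correlate far richer event streams.

```python
# Toy illustration of a SIEM-style detection rule.
from collections import Counter

# (timestamp_minute, account) pairs, as if parsed from authentication logs
failed_logins = [
    (1, "alice"), (2, "alice"), (2, "alice"), (3, "alice"),
    (1, "bob"),
    (4, "alice"), (4, "alice"),
]

THRESHOLD = 5  # assumed alerting threshold for the sketch

counts = Counter(account for _, account in failed_logins)
for account, n in counts.items():
    if n >= THRESHOLD:
        print(f"ALERT: {n} failed logins for {account} -- possible brute force")
```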

  3. Audit and adjust uses of website advertising technologies

In 2023, plaintiffs filed over 250 class actions alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies, respectively.  This software, often called website advertising technologies or “adtech” (and often referred to by plaintiffs as “tracking technologies”), is a common feature on many websites in operation today — millions of companies and governmental organizations have it.[5]  These lawsuits generally allege that the organization’s use of adtech violated federal and state wiretap statutes, consumer fraud statutes, and other laws, and often seek hundreds of millions of dollars in statutory damages.  The businesses targeted in these cases so far mostly have been healthcare providers, but the cases span nearly every industry, including retailers, consumer products, and universities.

Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.  The legal landscape has only begun to develop under plaintiffs’ many theories of liability, statutes, and common-law claims.  The adtech alleged has included not only the Meta Pixel and Google Analytics but also dozens of the hundreds or thousands of other types of adtech.  All this legal uncertainty, multiplied by requested statutory damages, equals serious business risk to any organization with adtech on its public-facing website(s).

An organization may not even know that adtech is present on its public-facing websites.  It could have been installed by a vendor without proper authorization, for example, or enabled by default, without any human intent, by certain web publishing tools.

Organizations should consider having an audit performed, before any litigation arises, to determine which adtech is or has been installed on which web pages, during which periods, and which data types were transmitted as a result.  Multiple experts specialize in such adtech audits and can serve as expert witnesses should any litigation arise.  An adtech audit is relatively quick and inexpensive, and performing one before litigation arises might be cost-beneficial because: (a) it might convince an organization to turn off some of its unneeded adtech now, thereby cutting off any potential damages relating to that adtech in a future lawsuit; (b) in the event of a future lawsuit, the audit would not be wasted, as it is one of the first things adtech defendants typically perform upon being served with an adtech lawsuit; and (c) it could assist in presently updating and modernizing website terms of use, data privacy policies, and vendor agreements (the next topic).
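
By way of illustration, a first step of such an audit can be automated along the following lines.  This Python sketch simply fetches a public page and checks its HTML for script signatures of common trackers; the page URL is a placeholder, and real audits are far more thorough (network capture, tag-manager review, historical snapshots).

```python
# Minimal sketch of one step in an adtech audit (requires network access).
import urllib.request

# Signature strings commonly associated with each tracker's loader script.
TRACKER_SIGNATURES = {
    "Meta Pixel": ["connect.facebook.net", "fbq("],
    "Google Analytics": ["googletagmanager.com", "google-analytics.com"],
}

def scan_page(url: str) -> dict[str, bool]:
    """Fetch a page and report which tracker signatures appear in its HTML."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return {name: any(sig in html for sig in sigs)
            for name, sigs in TRACKER_SIGNATURES.items()}

if __name__ == "__main__":
    print(scan_page("https://www.example.com/"))  # placeholder URL
```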

  4. Update and modernize website terms of use, data privacy policies, and vendor agreements

Organizations should consider whether to modify their website terms of use and data privacy policies to describe the organization’s use of adtech in additional detail.  Doing so could deter, or help defend, a future adtech class action similar to the many being filed today that allege omission of such details, raise claims under various states’ consumer fraud acts, and seek multimillion-dollar statutory damages.

Organizations should also consider adding clauses to their contracts with website vendors and marketing vendors that prohibit the vendor from incorporating any unwanted adtech into the organization’s public-facing websites.  Such clauses could help disprove the element of intent at issue in many claims asserted in the recent explosion of adtech lawsuits.

Implications For Corporations: Implementation of these best practices is critical to mitigating risk and saving litigation dollars.  To learn more, see the services Duane Morris provides in the practice areas of Class Action Litigation; Arbitration, Mediation, and Alternative Dispute Resolution; Cybersecurity; Privacy and Data Protection; Healthcare Information Technology; and Privacy and Security for Healthcare Providers.


[1] In 2015, for example, a large study found that of 33 banks that had engaged in practices relating to debit card overdrafts, 18 endured class actions and ended up paying out $1 billion to 29 million customers, whereas 15 had arbitration clauses and did not endure any class actions.  See Consumer Financial Protection Bureau (CFPB), Arbitration Study: Report to Congress, Pursuant to Dodd-Frank Wall Street Reform and Consumer Protection Act § 1028(a) at Section 8, available at https://files.consumerfinance.gov/f/201503_cfpb_arbitration-study-report-to-congress-2015.pdf.  These 15 with arbitration clauses paid almost nothing—less than 30 debit card customers per year in the entire nation filed any sort of arbitration dispute regarding their cards during the relevant timeframe.  See id. at Section 5, Table 1.  Another study of AT&T from 2003-2014 found similarly, concluding, “Although hundreds of millions of consumers and employees are obliged to use arbitration as their remedy, almost none do.”  Judith Resnik, Diffusing Disputes: The Public in the Private of Arbitration, the Private in Courts, and the Erasure of Rights, 124 Yale L.J. 2804 (2015).

[2] AAA, Consumer Mass Arbitration and Mediation Fee Schedule (amended and effective Jan. 15, 2024), available at https://www.adr.org/sites/default/files/Consumer_Mass_Arbitration_and_Mediation_Fee_Schedule.pdf; JAMS, Arbitration Schedule of Fees and Costs, available at https://www.jamsadr.com/arbitration-fees.

[3] J. Maria Glover, Mass Arbitration, 74 Stan. L. Rev. 1283, 1387 & Table 2 (2022).

[4] See, e.g., BuzzFeed Media Enters., Inc. v. Anderson, 2024 WL 2187054, at *1 (Del. Ch. May 15, 2024) (dismissing action to enjoin mass arbitration of claims brought by employees); Hoeg v. Samsung Elecs. Am., Inc., No. 23-CV-1951 (N.D. Ill. Feb. 2024) (ordering the defendant in BIPA claims brought by consumers to pay over $300,000 in AAA filing fees); Wallrich v. Samsung Elecs. Am., Inc., 2023 WL 5935024 (N.D. Ill. Sept. 12, 2023) (ordering the defendant in BIPA claims brought by consumers to pay over $4 million in AAA fees); Uber Tech., Inc. v. AAA, 204 A.D.3d 506, 510 (N.Y. App. Div. 2022) (ordering the defendant in reverse discrimination claims brought by customers to pay over $10 million in AAA case management fees).

[5] See, e.g., Customer Data Platform Institute, “Trackers and Pixels Feeding Data Broker Stores,” reporting “47% of websites using Meta Pixel, including 55% of S&P 500, 58% of retail, 42% of financial, and 33% of healthcare” (available at https://www.cdpinstitute.org/news/trackers-and-pixels-feeding-data-broker-data-stores/); BuiltWith, “Facebook Pixel Usage Statistics,” offering access to data on over 14 million websites using the Meta Pixel and stating, “We know of 5,861,028 live websites using Facebook Pixel and an additional 8,181,093 sites that used Facebook Pixel historically and 2,543,263 websites in the United States” (available at https://trends.builtwith.com/analytics/Facebook-Pixel).
