Ninth Circuit Dismisses Adtech Class Action For Lack Of Standing

By Gerald L. Maatman, Jr. and Justin Donoho

Duane Morris Takeaways:  On December 17, 2024, in Daghaly, et al. v. Bloomingdales.com, LLC, No. 23-4122, 2024 WL 5134350 (9th Cir. Dec. 17, 2024), the Ninth Circuit ruled that a plaintiff lacked Article III standing to bring her class action complaint alleging that an online retailer’s use of website advertising technology disclosed website visitors’ browsing activities in violation of the California Invasion of Privacy Act and other statutes.  The ruling is significant because it shows that adtech claims cannot be brought in federal court unless the complaint specifies the plaintiff’s web browsing activities that allegedly were disclosed.

Background

This case is one of hundreds of class actions that plaintiffs have filed nationwide alleging that the Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technology or “adtech,” is a common feature on many websites in operation today.

In Daghaly, Plaintiff brought suit against an online retailer.  According to Plaintiff, the retailer installed the Meta Pixel and other adtech on its public-facing website and thereby transmitted web-browsing information entered by visitors such as which products the visitor clicked on and whether the visitor added the product to his or her shopping cart or wish list.  Id., No. 23-CV-129, ECF No. 1 ¶¶ 44-45.  As for Plaintiff herself, she did not allege what she clicked on or what her web browsing activities entailed upon visiting the website, only that she accessed the website via the web browser on her phone and computer.  Id. ¶ 40.

Based on these allegations, Plaintiff alleged claims for violation of the California Invasion of Privacy Act (CIPA) and other statutes.  The district court dismissed the complaint for lack of personal jurisdiction.  Id., 697 F. Supp. 3d 996 (S.D. Cal. 2023).  Plaintiff appealed and, in its appellate response brief, the retailer argued for the first time that Plaintiff lacked Article III standing.

The Ninth Circuit’s Opinion

The Ninth Circuit agreed with the retailer, found that Plaintiff lacked standing, and remanded for further proceedings.

The Ninth Circuit explained that, to allege Article III standing, as is required to bring suit in federal court, a plaintiff must “clearly allege facts demonstrating” that she “suffered an injury in fact that is concrete, particularized, and actual or imminent.”  Id., 2024 WL 5134350, at *2 (citing, e.g., TransUnion LLC v. Ramirez, 594 U.S. 413, 423 (2021)). 

Plaintiff argued that she sufficiently alleged standing via her allegations that she “visited” and “accessed” the website and was “subjected to the interception of her Website Communications.”  Id. at *1.  Moreover, Plaintiff argued, the retailer’s alleged disclosure to adtech companies of the fact of her visiting the retailer’s website sufficiently alleged an invasion of her privacy, and thereby established Article III standing, because the adtech companies could use this fact to stitch together a broader, composite picture of Plaintiff’s online activities.  See oral argument, here.

The Ninth Circuit rejected these arguments.  It found that Plaintiff “does not allege that she herself actually made any communications that could have been intercepted once she had accessed the website. She does not assert, for example, that she made a purchase, entered text, or took any actions other than simply opening the webpage and then closing it.”  Id., 2024 WL 5134350, at *1.  As the Ninth Circuit explained by way of example during oral argument, Plaintiff did not allege, for instance, that she was shopping for underwear and that the retailer transmitted information about her underwear purchases.  Moreover, the Ninth Circuit found “no authority suggesting that the fact that she visited [the retailer’s website] (as opposed to information she might have entered while using the website) constitutes ‘contents’ of a communication within the meaning of CIPA Section 631.”  Id.

In short, the Ninth Circuit concluded that Plaintiff lacked Article III standing, and that this conclusion followed from Plaintiff’s failure to sufficiently allege the nature of the web browsing activities giving rise to all of her statutory claims.  Id. at *2.  The Ninth Circuit remanded with instructions that the district court grant leave to amend if properly requested. 

Implications For Companies

The holding of Daghaly is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have found that an adtech plaintiff’s failure to identify what allegedly private information was disclosed via the adtech warrants dismissal under Rule 12(b)(6) for failure to plausibly plead various statutory and common-law claims.  See, e.g., our blog post about such a decision here.  Daghaly shows that, in order to have Article III standing to bring a federal lawsuit in the first place, adtech plaintiffs also need to identify what allegedly private information, beyond the mere fact of a visit to an online retailer’s website, was allegedly disclosed via the adtech.

The FTC Issues Three New Orders Showing Its Increased 2024 Enforcement Activities Regarding AI And Adtech

By Gerald L. Maatman, Jr. and Justin R. Donoho

Duane Morris Takeaways: On December 3, 2024, the Federal Trade Commission (FTC) issued an order in In Re Intellivision Technologies Corp. (FTC Dec. 3, 2024) prohibiting an AI software developer from making misrepresentations that its AI-powered facial recognition software was free from gender and racial bias, and two orders in In Re Mobilewalla, Inc. (FTC Dec. 3, 2024), and In Re Gravy Analytics, Inc. (FTC Dec. 3, 2024), requiring data brokers to improve their advertising technology (adtech) privacy and security practices.  These three orders are significant in that they highlight that in 2024, the FTC significantly increased its enforcement activities in the areas of AI and adtech.

Background

In 2024, the FTC brought and litigated at least 10 enforcement actions involving alleged deception about AI, alleged AI-powered fraud, and allegedly biased AI.  See the FTC’s AI case webpage located here.  This is a fivefold increase over the at least two AI-related actions the FTC brought in 2023.  See id.  Just as private class actions involving AI are on the rise, so are the FTC’s AI-related enforcement actions.

This year the FTC also brought and litigated at least 21 enforcement actions categorized by the FTC as involving privacy and security.  See the FTC’s privacy and security webpage located here.  This is roughly double the FTC’s privacy and data security case activity in 2023.  See id.  Most of these new cases involve alleged unfair use of adtech, an area of recently increased litigation activity in private class actions as well.

In short, this year the FTC officially achieved its “paradigm shift” of focusing enforcement activities on modern technologies and data privacy, as forecasted in 2022 by Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, here.

All these complaints were brought by the FTC under the FTC Act, under which there is no private right of action.

The FTC’s December 3, 2024 Orders

In Intellivision, the FTC brought an enforcement action against a developer of AI-based facial recognition software embedded in home security products to enable consumers to gain access to their home security systems.  According to the complaint, the developer publicly described its facial recognition software as entirely free of any gender or racial bias, as purportedly shown by rigorous testing, when, in fact, testing by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) showed that the software was not among the top 100 best performing algorithms tested by NIST in terms of error rates across different demographics, including region of birth and sex.  (Compl. ¶ 11.)  Moreover, according to the FTC, the developer did not possess any of its own testing to support its claims of lack of bias.  Based on these allegations, the FTC brought misrepresentation claims under the FTC Act.  The parties agreed to a consent order, in which the developer agreed to refrain from making any representations about the accuracy, efficacy, or lack of bias of its facial recognition technology unless it could first substantiate such claims with reliable testing and documentation as set forth in the consent order.  The consent order also requires the developer to communicate the order to any of its managers and affiliated companies for the next 20 years, to make timely compliance reports and notices, and to create and maintain various detailed records, including regarding the company’s accounting, personnel, consumer complaints, compliance, marketing, and testing.

In Mobilewalla and Gravy Analytics, the FTC brought enforcement actions against data brokers who allegedly obtained consumer location data from other data suppliers and mobile applications and sold access to this data for purposes of online advertising without consumers’ consent.  According to the FTC’s complaints, the data brokers engaged in unfair collection, sale, use, and retention of sensitive location information, all in alleged violation of the FTC Act.  The parties agreed to consent orders, in which the data brokers agreed to refrain from collecting, selling, using, and retaining sensitive location information; to establish a Sensitive Location Data Program, a Supplier Assessment Program, and a comprehensive privacy program, as detailed in the orders; to provide consumers clear and conspicuous notice; to provide consumers a means to request data deletion; to delete location data as set forth in the orders; and to perform compliance, recordkeeping, and other activities, as set forth in the orders.

Implications For Companies

The FTC’s increased enforcement activities in the areas of adtech and AI serve as a cautionary tale for companies using adtech and AI. 

As the FTC’s recent orders and its 2024 dockets show, the FTC is increasingly using the FTC Act as a sword against alleged unfair use of adtech and AI.  Moreover, although the December 3 orders do not expressly impose any monetary penalties, the injunctive relief they impose may be costly, and other FTC consent orders have included harsher penalties, such as express penalties of millions of dollars and algorithmic disgorgement.  As adtech and AI continue to proliferate, organizations should consider, in light of the FTC’s increased enforcement activities in these areas (and the increased activities of the plaintiffs’ class action bar and the EEOC in these areas as well, as we blogged about here, here, here, here, and here), whether to modify their website terms of use, data privacy policies, and other notices to the organizations’ website visitors and customers to describe the organization’s use of AI and adtech in additional detail.  Doing so could deter, or help defend against, a future enforcement action or class action similar to the many being filed today, alleging omission of such additional details and seeking a wide range of injunctive and monetary relief.

Illinois Federal Court Dismisses Class Action Privacy Claims Involving Use Of Samsung’s “Gallery” App

By Tyler Zmick, Justin Donoho, and Gerald L. Maatman, Jr.

Duane Morris Takeaways:  In G.T., et al. v. Samsung Electronics America, Inc., et al., No. 21-CV-4976, 2024 WL 3520026 (N.D. Ill. July 24, 2024), Judge Lindsay C. Jenkins of the U.S. District Court for the Northern District of Illinois dismissed claims brought under the Illinois Biometric Information Privacy Act (“BIPA”).  In doing so, Judge Jenkins acknowledged limitations on the types of conduct (and types of data) that can subject a company to liability under the statute.  The decision is welcome news for businesses that design, sell, or license technology yet do not control or store any “biometric” data that may be generated when customers use the technology.  The case also reflects the common sense notion that a data point does not qualify as a “biometric identifier” under the BIPA if it cannot be used to identify a specific person.  G.T. v. Samsung is required reading for corporate counsel facing privacy class action litigation.

Background

Plaintiffs — a group of Illinois residents who used Samsung smartphones and tablets — alleged that their respective devices came pre-installed with a “Gallery application” (the “App”) that can be used to organize users’ photos.  According to Plaintiffs, whenever an image is created on a Samsung device, the App automatically: (1) scans the image to search for faces using Samsung’s “proprietary facial recognition technology”; and (2) if it detects a face, the App analyzes the face’s “unique facial geometry” to create a “face template” (i.e., “a unique digital representation of the face”).  Id. at *2.  The App then organizes photos based on images with similar face templates, resulting in “pictures with a certain individual’s face [being] ‘stacked’ together on the App.”  Id.

Based on their use of the devices, Plaintiffs alleged that Samsung violated §§ 15(a) and 15(b) of the BIPA by: (1) failing to develop a written policy made available to the public establishing a retention policy and guidelines for destroying biometric data, and (2) collecting Plaintiffs’ biometric data without providing them with the requisite notice and obtaining their written consent.

Samsung moved to dismiss on two grounds, arguing that: (1) Plaintiffs did not allege that Samsung “possessed” or “collected” their biometric data because they did not claim the data ever left their devices; and (2) Plaintiffs failed to allege that data generated by the App qualifies as “biometric identifiers” or “biometric information” under the BIPA, because Samsung cannot use the data to identify Plaintiffs or others appearing in uploaded photos.

The Court’s Decision

The Court granted Samsung’s motion to dismiss on both grounds.

“Possession” And “Collection” Of Biometric Data

Regarding Samsung’s first argument, the Court began by explaining what it means for an entity to be “in possession of” biometric data under § 15(a) and to “collect” biometric data under § 15(b).  The Court observed that “possession” occurs when an entity exercises control over data or holds it at its disposal.  Regarding “collection,” the Court noted that the term “collect,” and the other verbs used in § 15(b) (“capture, purchase, receive through trade, or otherwise obtain”), all refer to an entity taking an “active step” to gain control of biometric data.

The Court proceeded to consider Plaintiffs’ contention that Samsung was “in possession of” their biometrics because Samsung controls the proprietary software used to operate the App.  The Court sided with Samsung, however, concluding that Plaintiffs failed to allege “possession” (and thus failed to state a § 15(a) claim) because they did not allege that Samsung can access the data (as opposed to the technology Samsung employs).  Id. at *9 (“Samsung controls the App and its technology, but it does not follow that this control gives Samsung dominion over the Biometrics generated from the App, and plaintiffs have not alleged Samsung receives (or can receive) such data.”).

As for § 15(b), the Court rejected Plaintiffs’ argument that Samsung took an “active step” to “collect” their biometrics by designing the App to “automatically harvest[] biometric data from every photo stored on the Device.”  Id. at *11.  The Court determined that Plaintiffs’ argument failed for the same reason their § 15(a) “possession” argument failed.  Id. at *11-12 (“Plaintiffs’ argument again conflates technology with Biometrics. . . . Plaintiffs do not argue that Samsung possesses the Data or took any active steps to collect it.  Rather, the active step according to Plaintiffs is the creation of the technology.”).

“Biometric Identifiers” And “Biometric Information”

The Court next turned to Samsung’s second argument for dismissal – namely, that Plaintiffs failed to allege that data generated by the App is “biometric” under the BIPA because Samsung could not use it to identify Plaintiffs (or others appearing in uploaded photos).

In opposing this argument, Plaintiffs asserted that: (1) the “App scans facial geometry, which is an explicitly enumerated biometric identifier”; and (2) the “mathematical representations of face templates” stored through the App constitute “biometric information” (i.e., information “based on” scans of Plaintiffs’ “facial geometry”).  Id. at *13.

The Court ruled that “Samsung has the better argument,” holding that Plaintiffs’ claims failed because Plaintiffs did not allege that Samsung can use data generated through the App to identify specific people.  Id. at *15.  The Court acknowledged that cases are split “on whether a plaintiff must allege a biometric identifier can identify a particular individual, or if it is sufficient to allege the defendant merely scanned, for example, the plaintiff’s face or retina.”  Id. at *13.  After employing relevant principles of statutory interpretation, the Court sided with the cases in the former category and opined that “the plain meaning of ‘identifier,’ combined with the BIPA’s purpose, demonstrates that only those scans that can identify an individual qualify.”  Id. at *15.

Turning to the facts alleged in the Complaint, the Court concluded that Plaintiffs failed to state claims under the BIPA because the data generated by the App merely identifies and groups the unique faces of unnamed people, which does not make the data “biometric identifiers” or “biometric information.”  In other words, biometric information must be capable of recognizing an individual’s identity – “not simply an individual’s feature.”  Id. at *17; see also id. at *18 (noting that Plaintiffs claimed only that the App groups unidentified faces together, and that it is the device user who can add names or other identifying information to the faces).

Implications Of The Decision

G.T. v. Samsung is one of several recent decisions grappling with key questions surrounding the BIPA, including questions as to: (1) when an entity engages in conduct that rises to the level of “possession” or “collection” of biometrics; and (2) what data points qualify (and do not qualify) as “biometric identifiers” and “biometric information” such that they are subject to regulation under the statute.

Regarding the first question, the Samsung case reflects the developing majority position among courts – i.e., a company is not “in possession of,” and has not “collected,” data that it does not actually receive or access, even if it created and controlled the technology that generated the allegedly biometric data.

As for the second question, the Court’s decision in Samsung complements the Ninth Circuit’s recent decision in Zellmer v. Meta Platforms, Inc., where it held that a “biometric identifier” must be capable of identifying a specific person.  See Zellmer v. Meta Platforms, Inc., 104 F.4th 1117, 1124 (9th Cir. 2024) (“Reading the statute as a whole, it makes sense to impose a similar requirement on ‘biometric identifier,’ particularly because the ability to identify did not need to be spelled out in that term — it was readily apparent from the use of ‘identifier.’”).  Courts have not uniformly endorsed this reading, however, and parties will likely continue litigating the issue unless and until the Illinois Supreme Court provides the final word on what counts as a “biometric identifier” and “biometric information.”


The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
