Federal Court Holds Illinois Genetic Privacy Claim Not Preempted By Federal Transportation Regulations

By Justin Donoho, Gerald L. Maatman, Jr., and Tyler Zmick

Duane Morris Takeaways:  In Short v. MV Transportation, Inc., No. 24-CV-3019 (N.D. Ill. Mar. 10, 2025), Judge Manish S. Shah of the U.S. District Court for the Northern District of Illinois denied defendant’s bid to dismiss a claim brought under the Illinois Genetic Information Privacy Act (“GIPA”).  In his ruling, Judge Shah acknowledged that U.S. Department of Transportation regulations require companies in the transportation industry (including defendant) to ensure their drivers satisfy certain physical qualification criteria.  The Court nonetheless rejected defendant’s argument that the regulations preempt the GIPA because they do not specifically require employers to ask applicants about their family medical histories (which the GIPA prohibits).  In other words, the Court denied defendant’s motion to dismiss because the GIPA does not make it “physically impossible” to comply with federal regulations. 

Background

Plaintiff Kevin Short alleged that he applied for a position as a driver for Defendant MV Transportation, Inc., a company that provides paratransit services.  As part of the application process, Plaintiff was required to complete a physical examination during which he was asked about his family medical history, including whether his family members had a history of high blood pressure, heart disease, or diabetes.

Plaintiff subsequently sued MV Transportation under the GIPA, alleging that the company violated Section 25(c)(1) of the statute by “solicit[ing], request[ing], [or] requir[ing] . . . genetic information of a person or a family member of the person . . . as a condition of employment [or] preemployment application.”  410 ILCS 513/25(c)(1).

MV Transportation moved to dismiss the complaint on the basis that the Department of Transportation’s (“DOT”) regulations preempted Plaintiff’s GIPA claim.  Specifically, MV Transportation argued that Plaintiff’s claim was barred under a “conflict preemption” theory because allowing the claim to proceed would force MV Transportation to choose between complying with the GIPA or complying with federal requirements to “conduct[ ] thorough physical examinations of its drivers.”

For support, MV Transportation pointed to the Motor Carrier Safety Act, under which the DOT regulates commercial motor vehicle safety by promulgating “minimum safety standards” to ensure that “the physical condition of operators . . . is adequate to enable them to operate the vehicles safely” – including by requiring drivers to satisfy 13 “physical qualification criteria.”  49 U.S.C. § 31136(a)(3).

The Court’s Decision

In denying MV Transportation’s motion, the Court noted that conflict preemption applies only where “compliance with both federal and state regulations is a physical impossibility” or where the state law “stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress.”  Id. at 6-7 (citations omitted); see also id. at 6 (noting that “‘[i]nvoking some brooding federal interest’ is insufficient to establish preemption; instead, MV Transportation must identify ‘a constitutional text or a federal statute’ that displaces or conflicts with the state law”) (quoting Virginia Uranium, Inc. v. Warren, 587 U.S. 761, 767 (2019)).  The Court further observed that MV Transportation had the burden of overcoming the “presumption against preemption.”

In its ruling, the Court concluded that it is not physically impossible for MV Transportation to simultaneously comply with the GIPA and DOT regulations relative to Plaintiff’s pre-employment health screening because the DOT regulations do not specifically require any inquiry into a driver’s family medical history.  MV Transportation asserted that DOT regulations nonetheless “contemplate[] that medical examiners may discuss” a person’s family medical history during a physical exam.  The Court was not persuaded, however, stating that such a scenario is “not enough to suggest that compliance with GIPA and the federal regulations is ‘physically impossible.’”  Id. at 9 (“The mere possibility that a medical examiner asks for information protected by GIPA while performing an examination does not demonstrate impossibility to comply with both federal and state law.”). 

The Court similarly held that the GIPA is not an obstacle to the execution of Congress’s purposes, as reflected in the Motor Carrier Safety Act and DOT regulations.  As support for this conclusion, the Court observed that the relevant DOT regulations and the GIPA serve different purposes – the regulations are meant to promote the safe operation of commercial motor vehicles, while the GIPA focuses on health information privacy. 

Implications Of The Decision

Short v. MV Transportation is one of several recent decisions in which courts denied bids to dismiss GIPA claims at the pleading stage. 

Given this litigation landscape and the statute’s strict penalty provision – under which statutory damages can quickly become significant ($2,500 per negligent violation and $15,000 per intentional or reckless violation, see 410 ILCS 513/40(a)(1)-(2)) – employers should ensure they comply with the statute regarding any health screenings they ask applicants or employees to complete (including by explicitly advising applicants and employees not to disclose their family medical histories during the screenings).

It’s Here! The Duane Morris Privacy Class Action Review – 2025

By Gerald L. Maatman, Jr., Jennifer A. Riley, Alex W. Karasik, Gregory Tsonis, Justin Donoho, and Tyler Zmick

Duane Morris Takeaways: The last year saw a virtual explosion in privacy class action litigation. As a result, compliance with privacy laws in the myriad of ways that companies interact with employees, customers, and third parties is a corporate imperative. To that end, the class action team at Duane Morris is pleased to present the second edition of the Privacy Class Action Review – 2025. This publication analyzes the key privacy-related rulings and developments in 2024 and the significant legal decisions and trends impacting privacy class action litigation for 2025. We hope that companies and employers will benefit from this resource in their compliance with these evolving laws and standards.

Click here to bookmark or download a copy of the Privacy Class Action Review – 2025 e-book. Look forward to an episode on the Review coming soon on the Class Action Weekly Wire!

Ninth Circuit Dismisses Adtech Class Action For Lack Of Standing

By Gerald L. Maatman, Jr. and Justin Donoho

Duane Morris Takeaways:  On December 17, 2024, in Daghaly, et al. v. Bloomingdales.com, LLC, No. 23-4122, 2024 WL 5134350 (9th Cir. Dec. 17, 2024), the Ninth Circuit ruled that a plaintiff lacked Article III standing to bring her class action complaint alleging that an online retailer’s use of website advertising technology disclosed website visitors’ browsing activities in violation of the California Invasion of Privacy Act and other statutes.  The ruling is significant because it shows that adtech claims cannot be brought in federal court unless the plaintiff specifies which of her web browsing activities allegedly were disclosed. 

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on many websites in operation today.

In Daghaly, Plaintiff brought suit against an online retailer.  According to Plaintiff, the retailer installed the Meta Pixel and other adtech on its public-facing website and thereby transmitted web-browsing information entered by visitors such as which products the visitor clicked on and whether the visitor added the product to his or her shopping cart or wish list.  Id., No. 23-CV-129, ECF No. 1 ¶¶ 44-45.  As for Plaintiff herself, she did not allege what she clicked on or what her web browsing activities entailed upon visiting the website, only that she accessed the website via the web browser on her phone and computer.  Id. ¶ 40.

Based on these allegations, Plaintiff alleged claims for violation of the California Invasion of Privacy Act (CIPA) and other statutes.  The district court dismissed the complaint for lack of personal jurisdiction.  Id., 697 F. Supp. 3d 996 (S.D. Cal. 2023).  Plaintiff appealed and, in its appellate response brief, the retailer argued for the first time that Plaintiff lacked Article III standing.

The Ninth Circuit’s Opinion

The Ninth Circuit agreed with the retailer, found that Plaintiff lacked standing, and remanded for further proceedings.

The Ninth Circuit opined that, to allege Article III standing as required to bring suit in federal court, a plaintiff must “clearly allege facts demonstrating” that she “suffered an injury in fact that is concrete, particularized, and actual or imminent.”  Id., 2024 WL 5134350, at *2 (citing, e.g., TransUnion LLC v. Ramirez, 594 U.S. 413, 423 (2021)). 

Plaintiff argued that she sufficiently alleged standing via her allegations that she “visited” and “accessed” the website and was “subjected to the interception of her Website Communications.”  Id. at *1.  Moreover, Plaintiff argued, the retailer’s alleged disclosure to adtech companies of the fact of her visiting the retailer’s website sufficiently alleged an invasion of her privacy and thereby invoked Article III standing because the adtech companies could use this fact to stitch together a broader, composite picture of Plaintiff’s online activities.  See oral argument, here.

The Ninth Circuit rejected these arguments.  It found that Plaintiff “does not allege that she herself actually made any communications that could have been intercepted once she had accessed the website. She does not assert, for example, that she made a purchase, entered text, or took any actions other than simply opening the webpage and then closing it.”  Id., 2024 WL 5134350, at *1.  As the Ninth Circuit explained by way of example during oral argument, Plaintiff had not alleged, for instance, that she was shopping for underwear and that the retailer transmitted information about her underwear purchases.  Moreover, the Ninth Circuit found “no authority suggesting that the fact that she visited [the retailer’s website] (as opposed to information she might have entered while using the website) constitutes ‘contents’ of a communication within the meaning of CIPA Section 631.”  Id.

In short, the Ninth Circuit concluded that Plaintiff lacked Article III standing, and that this conclusion followed from Plaintiff’s failure to sufficiently allege the nature of the web browsing activities giving rise to all of her statutory claims.  Id. at *2.  The Ninth Circuit remanded with instructions that the district court grant leave to amend if properly requested. 

Implications For Companies

The holding of Daghaly is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have found that an adtech plaintiff’s failure to identify what allegedly private information was disclosed via the adtech warrants dismissal under Rule 12(b)(6) for failure to plausibly plead various statutory and common-law claims.  See, e.g., our blog post about such a decision here.  Daghaly shows that, to have Article III standing to bring a federal lawsuit in the first place, adtech plaintiffs also need to identify what allegedly private information, beyond the fact of a visit to an online retailer’s website, was allegedly disclosed via the adtech.

Florida Federal Court Refuses To Certify Adtech Class Action

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan K. Norimoto

Duane Morris Takeaways:  On October 1, 2024, Judge Robert Scola of the U.S. District Court for the Southern District of Florida denied class certification in a case involving website advertising technology (“adtech”) in Martinez v. D2C, LLC, 2024 WL 4367406 (S.D. Fla. Oct. 1, 2024).  The ruling is significant as it shows that plaintiffs who file class action complaints alleging improper use of adtech cannot satisfy Rule 23’s numerosity requirement merely by showing the presence of adtech on a website and numerous visitors to that website.  The Court’s reasoning in denying class certification applies not only in adtech cases raising claims brought under the Video Privacy Protection Act (“VPPA”), like this one, but also to other adtech cases raising a wide variety of other statutory and common law legal theories.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on millions of corporate, governmental, and other websites in operation today.

In Martinez, the plaintiffs brought suit against D2C, LLC d/b/a Univision NOW (“Univision”), an online video-streaming service.  The parties did not dispute, at least for the purposes of class certification, that: (A) Univision installed the Meta Pixel on its video-streaming website; (B) Univision was a “video tape service provider” and the plaintiffs and other Univision subscribers were “consumers” under the VPPA, thereby giving rise to liability under that statute if the plaintiffs could show Univision transmitted their personally identifiable information (PII) such as their Facebook IDs along with the videos they accessed to Meta without their consent; (C) none of the plaintiffs consented; and (D) 35,845 subscribers viewed at least one video on Univision’s website.  Id. at *2. 

The plaintiffs moved for class certification under Rule 23.  The plaintiffs maintained that at least 17,000 subscribers, including (or in addition to) them, had their PII disclosed to Meta by Univision.  Id. at *3.  The plaintiffs reached this number upon acknowledging “at least two impediments to a subscriber’s viewing information’s being transmitted to Meta: (1) not having a Facebook account; and (2) using a browser that, by default, blocks the Pixel.”  Id. at *6.  Thus, the plaintiffs pointed to “statistics regarding the percentage of people in the United States who have Facebook accounts (68%) and the testimony of their expert … regarding the percentage of the population who use a web browser that would not block the Pixel transmission (70%), to conclude, using ‘basic math,’ that the class would be comprised of ‘at least approximately 17,000 individuals.’”  Id. at *6.  In contrast, Univision maintained that the plaintiffs failed to carry their burden of showing that even a single subscriber had their PII disclosed, including the three named plaintiffs.  Id. at *3.

The Court’s Decision

The Court agreed with Univision and held that the plaintiffs did not carry their burden of showing numerosity.

First, the Court held that the plaintiffs’ reliance on statistics regarding percentage of people who have Facebook accounts was unhelpful, because “being logged in to Facebook”—not just having an account—“is a prerequisite to the Pixel disclosing information.”  Id. at *7 (emphasis in original).  Moreover, “being simultaneously logged in to Facebook is still not enough to necessarily prompt a Pixel transmission: a subscriber must also have accessed the prerecorded video on Univision’s website through the same web browser and device through which the subscriber (and not another user) was logged into Facebook.”  Id.

Second, the Court held that the plaintiffs’ reliance on their proffer that 70% of people use Google Chrome and Microsoft Edge, which allow Pixel transmission “under default configurations,” failed to account for all of the following “actions a user can take that would also block any Pixel transmission to Meta: enabling a browser’s third-party cookie blockers; setting a browser’s cache to ‘self-destruct’; clearing cookies upon the end of a browser session; and deploying add-on software that blocks third-party cookies.”  Id.

In short, the Court reasoned that the plaintiffs did not establish “the means to make a supported factual finding, that the class to be certified meets the numerosity requirement.”  Id. at *9.  Moreover, the Court found that the plaintiffs had not demonstrated that “any” PII had been disclosed, including their own.  Id. (emphasis in original).  In reply, the plaintiffs attempted to introduce evidence supplied by Meta that one of the plaintiffs’ PII had been transmitted to Meta.  Id.  The Court refused to consider this new information, supplied for the first time on reply, and further found that even if it were to consider the new evidence, “this only gets the Plaintiffs to one ‘class member.’”  Id. at *10 (emphasis in original).

Finding the plaintiffs’ failure to satisfy the numerosity requirement dispositive, the Court declined to evaluate the other Rule 23 factors.  Id. at *5.

Implications For Companies

This case is a win for defendants in adtech class actions.  In such cases, the Martinez decision can be cited as useful precedent for showing that the numerosity requirement is not met where plaintiffs put forth only speculative evidence as to whether the adtech disclosed plaintiffs’ and alleged class members’ PII to third parties.  The Court’s reasoning in Martinez applies not only in VPPA cases but also in other adtech cases alleging claims for invasion of privacy, claims under state and federal wiretap acts, and more.  All of these legal theories share adtech’s transmission of PII to third parties as a necessary element.  In sum, to establish numerosity, plaintiffs must demonstrate, at a minimum, that class members were logged into their own adtech accounts at the time they visited the defendants’ website, using the same device and browser for the adtech account and the visit, using a browser that did not block the transmission by default, and not deploying any of the browser settings or add-on software that would have blocked the transmission.

Georgia Federal Court Dismisses Data Privacy Class Action Against Healthcare Company For Failure To Sufficiently Allege Any Invasion Of Privacy, Damages, Or Wiretap Violation

By Gerald L. Maatman, Jr., Justin Donoho, and Ryan T. Garippo

Duane Morris Takeaways:  On August 24, 2024, in T.D. v. Piedmont Healthcare, Inc., No. 23-CV-5416 (N.D. Ga. Aug. 24, 2024), Judge Thomas Thrash of the U.S. District Court for the Northern District of Georgia dismissed in its entirety a class action complaint alleging that a healthcare company’s use of website advertising technology installed in its MyChart patient portal disclosed the plaintiffs’ private information in commission of the common law torts of invasion of privacy, breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, and in violation of the Federal Wiretap Act.  The ruling is significant because it shows that such claims cannot surmount Rule 12(b)(6)’s plausibility standard for legal reasons broadly applicable to a wide range of adtech class actions currently on file in many jurisdictions across the nation.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  As the Court explained, “cases like this have sprouted like weeds in recent years.”  Id. at 5.

In Piedmont, Plaintiffs brought suit against Piedmont Healthcare, Inc. (“Piedmont”).  According to Plaintiffs, Piedmont installed the Meta Pixel on its public-facing website and its secure patient portal, and thereby transmitted to Meta Plaintiffs’ “personally identifiable information (PII) and protected health information (PHI) without their consent.” Id. at 1-2.

Based on these allegations, Plaintiffs alleged claims for invasion of privacy, breach of fiduciary duty, negligence, breach of contract, unjust enrichment, and violation of the Electronic Communications Privacy Act (“ECPA”).  Piedmont moved to dismiss under Rule 12(b)(6) for failure to state sufficient facts that, if accepted as true, would state a claim for relief that is plausible on its face.

The Court’s Opinion

The Court agreed with Piedmont and dismissed all of Plaintiffs’ claims.

To state a claim for invasion of privacy, Plaintiffs were required to allege facts sufficient to show “an unreasonable and highly offensive intrusion upon another’s seclusion.”  Id. at 5.  Plaintiffs argued that Piedmont intruded upon their privacy by using the Meta Pixel to secretly transmit their PII and PHI to a third party for commercial gain.  Id. at 4.  Piedmont argued that these allegations failed to plausibly plead an intrusion or actionable intent, or that any intrusion was reasonably offensive or objectionable.  Id.  The Court concluded that “it seems that the weight of authority in similar pixel tracking cases is now solidly in favor of Piedmont’s argument. There is no intrusion upon privacy when a patient voluntarily provides personally identifiable information and protected health information to his or her healthcare provider.”  Id. at 5-6 (collecting cases).  The Court further commented that “it is widely understood that when browsing websites, your behavior may be tracked, studied, shared, and monetized. So it may not come as much of a surprise when you see an online advertisement for fertilizer shortly after searching for information about keeping your lawn green.”  Id. at 3-4.

To state claims for breach of fiduciary duty, negligence, breach of contract, and unjust enrichment, one of the elements a plaintiff must allege is damages or, relatedly, enrichment.  Id. at 7-10.  Plaintiffs argued that they alleged seven categories of damages, as follows: “(i) invasion of privacy, including increased spam and targeted advertising they did not ask for; (ii) loss of confidentiality; (iii) embarrassment, emotional distress, humiliation and loss of enjoyment of life; (iv) lost time and opportunity costs associated with attempting to mitigate the consequences of the disclosure of their Private Information; (v) loss of benefit of the bargain; (vi) diminution of value of Private Information and (vii) the continued and ongoing risk to their Private Information.”  Id. at 9.  Piedmont argued that these damages theories stemming from “the provision of encrypted information only to Facebook” were implausible.  Id. at 7.  The Court agreed with Piedmont and rejected all of Plaintiffs’ damages theories.  Accordingly, it dismissed the remainder of Plaintiffs’ common-law claims.  As the Court explained: “No facts are alleged that would explain how receiving targeted advertisements from Facebook and Piedmont would plausibly cause any of the Plaintiffs to suffer these damages. This is not a case where the Plaintiffs’ personal information was stolen by criminal hackers with malicious intent. The Plaintiffs received targeted advertisements because they are Facebook users and have Facebook IDs. The Court finds the Plaintiffs’ damages theories untenable. Indeed, this court has rejected many identical theories arising under similar circumstances.”  Id. (collecting cases).

To state a claim for violation of the ECPA, also known as the federal wiretap act, a plaintiff must show an intentional interception of the contents of an electronic communication.  Id. at 11.  The ECPA is a one-party consent statute, meaning that there is no liability under the statute for any party to the communication “unless such communication is intercepted for the purposes of committing a criminal or tortious act in violation of the Constitution or laws of the United States or any State.”  18 U.S.C. § 2511(2)(d).  Piedmont argued that it could not have intercepted the same transmission it received on its website, nor could it have acted with a tortious or criminal purpose in seeking to drive marketing and revenue.  Id. at 10-11.  In response, the Plaintiffs contended that they stated a plausible ECPA claim, arguing that Piedmont intercepted the contents of their PII and PHI when it acquired such information through the Meta Pixel on its website and that the party exception is inapplicable because Piedmont acted with criminal and tortious intent in “wiretapping” their PII and PHI.  Id. at 11.  The Court concisely concluded: “As was the case in the invasion of privacy context, the weight of persuasive authority in similar pixel tracking cases supports Piedmont’s position.”  Id. at 11-12 (collecting cases).

Implications For Companies

The holding of Piedmont is a win for adtech class action defendants and should be instructive for courts around the country.  While many adtech cases around the country have made it past a motion to dismiss, many have not, and for the many that continue to be filed regularly, the outcome remains to be seen.  Piedmont provides powerful precedent for any company defending against adtech class action claims for invasion of privacy, common-law claims for damages or unjust enrichment, and alleged violations of the federal wiretap act.

Illinois Federal Court Dismisses Class Action Privacy Claims Involving Use Of Samsung’s “Gallery” App

By Tyler Zmick, Justin Donoho, and Gerald L. Maatman, Jr.

Duane Morris Takeaways:  In G.T., et al. v. Samsung Electronics America, Inc., et al., No. 21-CV-4976, 2024 WL 3520026 (N.D. Ill. July 24, 2024), Judge Lindsay C. Jenkins of the U.S. District Court for the Northern District of Illinois dismissed claims brought under the Illinois Biometric Information Privacy Act (“BIPA”).  In doing so, Judge Jenkins acknowledged limitations on the types of conduct (and types of data) that can subject a company to liability under the statute.  The decision is welcome news for businesses that design, sell, or license technology yet do not control or store any “biometric” data that may be generated when customers use the technology.  The case also reflects the common sense notion that a data point does not qualify as a “biometric identifier” under the BIPA if it cannot be used to identify a specific person.  G.T. v. Samsung is required reading for corporate counsel facing privacy class action litigation.

Background

Plaintiffs — a group of Illinois residents who used Samsung smartphones and tablets — alleged that their respective devices came pre-installed with a “Gallery application” (the “App”) that can be used to organize users’ photos.  According to Plaintiffs, whenever an image is created on a Samsung device, the App automatically: (1) scans the image to search for faces using Samsung’s “proprietary facial recognition technology”; and (2) if it detects a face, the App analyzes the face’s “unique facial geometry” to create a “face template” (i.e., “a unique digital representation of the face”).  Id. at *2.  The App then organizes photos based on images with similar face templates, resulting in “pictures with a certain individual’s face [being] ‘stacked’ together on the App.”  Id.

Based on their use of the devices, Plaintiffs alleged that Samsung violated §§ 15(a) and 15(b) of the BIPA by: (1) failing to develop a written policy made available to the public establishing a retention policy and guidelines for destroying biometric data, and (2) collecting Plaintiffs’ biometric data without providing them with the requisite notice and obtaining their written consent.

Samsung moved to dismiss on two grounds, arguing that: (1) Plaintiffs did not allege that Samsung “possessed” or “collected” their biometric data because they did not claim the data ever left their devices; and (2) Plaintiffs failed to allege that data generated by the App qualifies as “biometric identifiers” or “biometric information” under the BIPA, because Samsung cannot use the data to identify Plaintiffs or others appearing in uploaded photos.

The Court’s Decision

The Court granted Samsung’s motion to dismiss on both grounds.

“Possession” And “Collection” Of Biometric Data

Regarding Samsung’s first argument, the Court began by explaining what it means for an entity to be “in possession of” biometric data under § 15(a) and to “collect” biometric data under § 15(b).  The Court observed that “possession” occurs when an entity exercises control over data or holds it at its disposal.  Regarding “collection,” the Court noted that the term “collect,” and the other verbs used in § 15(b) (“capture, purchase, receive through trade, or otherwise obtain”), all refer to an entity taking an “active step” to gain control of biometric data.

The Court proceeded to consider Plaintiffs’ contention that Samsung was “in possession of” their biometrics because Samsung controls the proprietary software used to operate the App.  The Court sided with Samsung, however, concluding that Plaintiffs failed to allege “possession” (and thus failed to state a § 15(a) claim) because they did not allege that Samsung can access the data (as opposed to the technology Samsung employs).  Id. at *9 (“Samsung controls the App and its technology, but it does not follow that this control gives Samsung dominion over the Biometrics generated from the App, and plaintiffs have not alleged Samsung receives (or can receive) such data.”).

As for § 15(b), the Court rejected Plaintiffs’ argument that Samsung took an “active step” to “collect” their biometrics by designing the App to “automatically harvest[] biometric data from every photo stored on the Device.”  Id. at *11.  The Court determined that Plaintiffs’ argument failed for the same reason their § 15(a) “possession” argument failed.  Id. at *11-12 (“Plaintiffs’ argument again conflates technology with Biometrics. . . . Plaintiffs do not argue that Samsung possesses the Data or took any active steps to collect it.  Rather, the active step according to Plaintiffs is the creation of the technology.”).

“Biometric Identifiers” And “Biometric Information”

The Court next turned to Samsung’s second argument for dismissal – namely, that Plaintiffs failed to allege that data generated by the App is “biometric” under the BIPA because Samsung could not use it to identify Plaintiffs (or others appearing in uploaded photos).

In opposing this argument, Plaintiffs asserted that: (1) the “App scans facial geometry, which is an explicitly enumerated biometric identifier”; and (2) the “mathematical representations of face templates” stored through the App constitute “biometric information” (i.e., information “based on” scans of Plaintiffs’ “facial geometry”).  Id. at *13.

The Court ruled that “Samsung has the better argument,” holding that Plaintiffs’ claims failed because Plaintiffs did not allege that Samsung can use data generated through the App to identify specific people.  Id. at *15.  The Court acknowledged that cases are split “on whether a plaintiff must allege a biometric identifier can identify a particular individual, or if it is sufficient to allege the defendant merely scanned, for example, the plaintiff’s face or retina.”  Id. at *13.  After employing relevant principles of statutory interpretation, the Court sided with the cases in the former category and opined that “the plain meaning of ‘identifier,’ combined with the BIPA’s purpose, demonstrates that only those scans that can identify an individual qualify.”  Id. at *15.

Turning to the facts alleged in the Complaint, the Court concluded that Plaintiffs failed to state claims under the BIPA because the ability to recognize and group the unique faces of unnamed people does not make the data generated by the App “biometric identifiers” or “biometric information.”  In other words, biometric information must be capable of recognizing an individual’s identity – “not simply an individual’s feature.”  Id. at *17; see also id. at *18 (noting that Plaintiffs claimed only that the App groups unidentified faces together, and that it is the device user who can add names or other identifying information to the faces).

Implications Of The Decision

G.T. v. Samsung is one of several recent decisions grappling with key questions surrounding the BIPA, including questions as to: (1) when an entity engages in conduct that rises to the level of “possession” or “collection” of biometrics; and (2) what data points qualify (and do not qualify) as “biometric identifiers” and “biometric information” such that they are subject to regulation under the statute.

Regarding the first question, the Samsung case reflects the developing majority position among courts – i.e., a company is not “in possession of,” and has not “collected,” data that it does not actually receive or access, even if it created and controlled the technology that generated the allegedly biometric data.

As for the second question, the Court’s decision in Samsung complements the Ninth Circuit’s recent decision in Zellmer v. Meta Platforms, Inc., in which the Ninth Circuit held that a “biometric identifier” must be capable of identifying a specific person.  See Zellmer v. Meta Platforms, Inc., 104 F.4th 1117, 1124 (9th Cir. 2024) (“Reading the statute as a whole, it makes sense to impose a similar requirement on ‘biometric identifier,’ particularly because the ability to identify did not need to be spelled out in that term — it was readily apparent from the use of ‘identifier.’”).  Courts have not uniformly endorsed this reading, however, and parties will likely continue litigating the issue unless and until the Illinois Supreme Court provides the final word on what counts as a “biometric identifier” and “biometric information.”

California Federal Court Denies Motion To Dismiss Artificial Intelligence Employment Discrimination Lawsuit

By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller

Duane Morris Takeaways:  In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability.  This litigation has been closely watched for its novel theory of liability based on the use of artificial intelligence in making personnel decisions.  For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount.  This ruling illustrates that employment screening vendors who utilize AI software may potentially be liable for discrimination claims as agents of employers.

This development follows Workday’s first successful Motion to Dismiss, which we blogged about here, and the EEOC’s amicus brief filing, which we blogged about here.

Case Background

Plaintiff is an African American male over the age of 40, with a bachelor’s degree in finance from Morehouse College, an all-male Historically Black College and University, and an honors graduate degree. Id. at 2. Plaintiff also alleges he suffered from anxiety and depression.  Since 2017, Plaintiff applied to over 100 jobs with companies that use Workday’s screening tools.  In many applications, Plaintiff alleges he was required to take a “Workday-branded assessment and/or personality test.”  Plaintiff asserts these assessments “likely . . . reveal mental health disorders or cognitive impairments,” so others who suffer from anxiety and depression are “likely to perform worse  … and [are] screened out.”  Id. at 2-3.  Plaintiff was allegedly denied employment through Workday’s platform across all submitted applications.

Plaintiff alleges Workday’s algorithmic decision-making tools discriminate against job applicants who are African-American, over the age of 40, and/or disabled.  Id. at 3.  In support of these allegations, Plaintiff claims that in one instance, he applied for a position at 12:55 a.m. and his application was rejected less than an hour later.  Plaintiff brought claims under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (“Section 1981”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the ADA Amendments Act of 2008 (“ADA”), for intentional discrimination on the basis of race and age, and disparate impact discrimination on the basis of race, age, and disability.  Plaintiff also brought a claim for aiding and abetting race, disability, and age discrimination against Workday under California’s Fair Employment and Housing Act (“FEHA”).  Workday moved to dismiss; Plaintiff’s opposition was supported by an amicus brief filed by the EEOC.

The Court’s Decision

The Court granted in part and denied in part Workday’s motion to dismiss.  At the outset of its opinion, the Court noted that Plaintiff alleged Workday was liable for employment discrimination, under Title VII, the ADEA, and the ADA, on three theories: as (1) an employment agency; (2) an agent of employers; and (3) an indirect employer.  Id. at 5.

The Court opined that the relevant statutes prohibit discrimination “not just by employers but also by agents of those employers,” so an employer cannot “escape liability for discrimination by delegating [] traditional functions, like hiring, to a third party.”  Id.  Therefore, an employer’s agent can be independently liable when the employer has delegated to the agent “functions [that] are traditionally exercised by the employer.”  Id.

With regard to the “employment agency” theory, the Court reasoned that employment agencies “procure employees for an employer” – meaning “they find candidates for an employer’s position; they do not actually employ those employees.”  Id. at 7.  The Court further reasoned that employment agencies are liable when they “fail or refuse to refer” individuals for consideration by employers on prohibited bases.  Id.  The Court held Plaintiff did not sufficiently allege that Workday finds employees for employers such that Workday is an employment agency.  Accordingly, the Court granted Workday’s motion to dismiss with respect to the anti-discrimination statutes based on an employment agency theory, without leave to amend.

In addition, the Court held that Workday may be liable on an agency theory, as Plaintiff plausibly alleged Workday’s customers delegated to Workday their traditional function of rejecting candidates or advancing them to the interview stage.  Id.  The Court determined that, were it to reason otherwise and accept Workday’s arguments, companies could “escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence).”  Id. at 8.  The Court determined that Plaintiff’s allegations that Workday’s decision-making tools “make hiring decisions” because its software can “automatically disposition[] or move[] candidates forward in the recruiting process” were plausible.  Id. at 9.

The Court opined that given Workday’s allegedly “crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”  Id.  With regard to artificial intelligence, the Court noted “Workday’s role in the hiring process was no less significant because it allegedly happens through artificial intelligence,” and the Court declined to “draw[] an artificial distinction between software decision-makers and human decision-makers,” as any distinction would “gut anti-discrimination laws in the modern era.”  Id. at 10.

Accordingly, the Court denied Workday’s motion to dismiss Plaintiff’s federal discrimination claims.

Disparate Impact Claims

The Court next denied Workday’s motion to dismiss Plaintiff’s disparate impact discrimination claims as Plaintiff adequately alleged all elements of a prima facie case for disparate impact.

First, Plaintiff’s amended complaint asserted that Workday’s use of algorithmic decision-making tools, including training data from personality tests, to screen applicants had a disparate impact on job-seekers in certain protected categories.  Second, the Court recognized that Plaintiff’s allegations were not typical.  “Unlike a typical employment discrimination case where the dispute centers on the plaintiff’s application to a single job, [Plaintiff] has applied to and been rejected from over 100 jobs for which he was allegedly qualified.”  Id. at 14.  The Court reasoned the “common denominator” for these positions was Workday and the platform Workday provided to companies for application intake and screening.  Id.

The Court held that “[t]he zero percent success rate at passing Workday’s initial screening,” combined with Plaintiff’s allegations of bias in Workday’s training data and tools, plausibly supported an inference that Workday’s algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate’s race, age, or disability.  Id. at 15.  The Court therefore denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA.  Id. at 16.

Intentional Discrimination Claims

The Court granted Workday’s motion to dismiss Plaintiff’s claims that Workday intentionally discriminated against him based on race and age.  Id.  The Court found that Plaintiff sufficiently alleged he was qualified through his various degrees, areas of expertise, and work experience.  However, the Court found that Plaintiff’s allegation that Workday intended its screening tools to be discriminatory because “Workday [was] aware of the discriminatory effects of its applicant screening tools” was not enough to satisfy his pleading burden.  Id. at 18.  Accordingly, the Court granted Workday’s motion to dismiss Plaintiff’s intentional discrimination claims under Title VII, the ADEA, and § 1981, without leave to amend, but left the door open for Plaintiff to seek amendment if a discriminatory intention is revealed during future discovery.  Id.  Finally, the Court granted Workday’s motion to dismiss Plaintiff’s claim under California’s Fair Employment and Housing Act with leave to amend.

Implications For Employers

The Court’s resolution of liability for software vendors that provide AI screening tools to employers centered on whether those tools were involved in “traditional employment decisions.”  Here, the Court held that Plaintiff sufficiently alleged that Workday was an agent for employers because it made employment decisions in the screening process through the use of artificial intelligence.

This decision likely will be used as a roadmap for the plaintiffs’ bar to bring discrimination claims against third-party vendors involved in the employment decision process, especially those using algorithmic software to make those decisions. Companies should also take heed, especially given the EEOC’s prior guidance that suggests employers should be auditing their vendors for the impact of their use of artificial intelligence.

California Federal Court Refuses To Dismiss Wiretapping Class Action Involving Company’s Use Of Third-Party AI Software

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan Norimoto

Duane Morris Takeaways:  On July 5, 2024, in Jones, et al. v. Peloton Interactive, Inc., No. 23-CV-1082, 2024 WL 3315989 (S.D. Cal. July 5, 2024), Judge M. James Lorenz of the U.S. District Court for the Southern District of California denied a motion to dismiss a class action complaint alleging that a company’s use of a third-party AI-powered chat feature embedded in the company’s website aided and abetted an interception in violation of the California Invasion of Privacy Act (CIPA).  Judge Lorenz was unpersuaded by the company’s arguments that the third party functioned as an extension of the company rather than as a third-party eavesdropper.  Instead, the Court found that the complaint alleged sufficient facts to plausibly show that the third party used the chats to improve its own AI algorithm and thus was more akin to a third-party eavesdropper, for whose wiretapping the company could be held liable on an aiding-and-abetting theory under the CIPA.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that third-party AI-powered software embedded in defendants’ websites or other processes and technologies captured plaintiffs’ information and sent it to the third party.  These cases commonly raise claims under federal or state wiretap acts and seek hundreds of millions or billions of dollars in statutory damages.  No wiretap claim can succeed, however, where the plaintiff has consented to the embedded technology’s receipt of their communications.  See, e.g., Smith v. Facebook, Inc., 262 F. Supp. 3d 943, 955 (N.D. Cal. 2017) (dismissing CIPA claim involving embedded Meta Pixel technology because plaintiffs consented to alleged interceptions by Meta via their Facebook user agreements).

In Jones, Plaintiffs brought suit against an exercise equipment and media company.  According to Plaintiffs, the defendant company used third-party software embedded in its website’s chat feature.  Id. at *1.  Plaintiffs further alleged that the software routed the communications directly to the third party without Plaintiffs’ consent, thereby allowing the third party to use the content of the communications “to improve the technological function and capabilities of its proprietary, patented artificial intelligence software.”  Id. at **1, 4.

Based on these allegations, Plaintiffs alleged a claim for aiding and abetting an unlawful interception and use of the intercepted information under California’s wiretapping statute, CIPA § 631.  Id. at *2.  Although Plaintiffs did not allege any actual damages, see ECF No. 1, the statutory damages they sought totaled at least $1 billion.  See id. ¶ 33 (alleging hundreds of thousands of class members); Cal. Penal Code § 637.2 (setting forth statutory damages of $5,000 per violation).  The company moved to dismiss under Rule 12(b)(6), arguing that the “party exception” to CIPA applied because the third-party software “functions as an extension of [the company] rather than as a third-party eavesdropper.”  2024 WL 3315989, at *2.

The Court’s Opinion

The Court denied the company’s motion and allowed Plaintiffs’ CIPA claim to proceed to discovery.

The CIPA contains a party exception, meaning that there is no liability under the statute for any party to the communication.  Id. at *2.  To determine, for purposes of CIPA’s party exception, whether the embedded chat software provider was more akin to a party or a third-party eavesdropper, the Court looked to the “technical context of the case.”  Id. at *3.  As the Court explained, a software provider can be held liable as a third party under CIPA if that entity listens in on a consensual conversation and “uses the collected data for its own commercial purposes.”  Id.  By contrast, if the software provider merely collects, refines, and relays the information obtained on the company website back to the company “in aid of [defendant’s] business,” then it functions as a tool and not as a third party.  Id.

Guided by this framework, the Court found sufficient allegations that the software provider used the chats collected on the company’s website for its own purposes of improving its AI-driven algorithm.  Id. at *4.  Therefore, according to the Court, the complaint sufficiently alleged that the software provider was “more than a mere ‘extension’” of the company, such that CIPA’s party exemption did not apply and Plaintiffs sufficiently stated a claim for the company’s aiding and abetting of the software provider’s wiretap violation.  Id.

Implications For Companies

The Court’s opinion serves as a cautionary tale for companies using third-party AI-powered processes and technologies that collect customer communications and information.  As the ruling shows, litigation risk associated with companies’ use of third-party AI-powered algorithms is not limited to complaints alleging damaging outcomes such as the discriminatory impacts alleged in Louis v. Saferent Sols., LLC, 685 F. Supp. 3d 19, 41 (D. Mass. 2023) (denying motion to dismiss claim under Fair Housing Act against landlord in conjunction with landlord’s use of an algorithm to calculate the risk of leasing a property to a particular tenant).  In addition, companies face the risk of high-stakes claims for statutory damages under wiretap statutes associated with their use of third-party AI-powered algorithms embedded in their websites, even if the third party’s only use of the collected communications is to improve its algorithm and even if no actual damages are alleged.

As AI-related technologies continue their growth spurt, and litigation in this area spurts accordingly, organizations should consider in light of Jones whether to modify their website terms of use, data privacy policies, and all other notices to the organizations’ website visitors and customers to describe the organization’s use of AI in additional detail.  Doing so could deter or help defend a future AI class action lawsuit similar to the many that are being filed today, alleging omission of such additional details, raising claims brought under various states’ wiretap acts and consumer fraud acts, and seeking multimillion-dollar and billion-dollar statutory damages.

California Federal Court Rejects AI Class Action Plaintiffs’ Cherry-Picking Of AI Algorithm Test Results And Orders Production Of All Results And Account Settings

By Gerald L. Maatman, Jr., Justin R. Donoho, and Brandon Spurlock

Duane Morris Takeaways:  On June 24, 2024, Magistrate Judge Robert Illman of the U.S. District Court for the Northern District of California ordered a group of authors alleging copyright infringement by a maker of generative artificial intelligence to produce information relating to their pre-suit algorithmic testing in Tremblay v. OpenAI, Inc., No. 23-CV-3223 (N.D. Cal. June 13, 2024).  The ruling is significant as it shows that plaintiffs who file class action complaints alleging improper use of AI and relying on cherry-picked results from their testing of the AI-based algorithms at issue cannot simultaneously withhold during discovery their negative testing results and the account settings used to produce any results.  The Court’s reasoning applies not only in gen AI cases, but also in other AI cases such as website advertising technology cases.

Background

This case is one of over a dozen class actions filed in the last two years alleging that makers of generative AI technologies violated copyright laws by training their algorithms on copyrighted content, or that they violated wiretapping, data privacy, and other laws by training their algorithms on personal information.

It is also one of the hundreds of class actions filed in the last two years involving AI technologies that perform not only gen AI but also facial recognition or other facial analysis, website advertising, profiling, automated decision making, educational operations, clinical medicine, and more.

In Tremblay v. OpenAI, plaintiffs (a group of authors) allege that an AI company trained its algorithm by “copying massive amounts of text” to enable it to “emit convincingly naturalistic text outputs in response to user prompts.”  Id. at 1.  Plaintiffs allege these outputs include summaries that are so accurate that the algorithm must retain knowledge of the ingested copyrighted works in order to output similar textual content.  Id. at 2.  An exhibit to the complaint displaying the algorithm’s prompts and outputs purports to support these allegations.  Id.

The AI company sought discovery of (a) the account settings; and (b) the algorithm’s prompts and outputs that “did not” include the plaintiffs’ “preferred, cherry-picked” results.  Id. (emphasis in original).  The plaintiffs refused, citing work-product privilege, which protects from discovery documents prepared in anticipation of litigation or for trial.  The AI company argued that the authors waived that protection by revealing their preferred prompts and outputs, and asked the court to order production of the negative prompts and outputs, too, and all related account settings.  Id. at 2-3.

The Court’s Decision

The Court agreed with the AI company and ordered production of the account settings and all of plaintiffs’ pre-suit algorithmic testing results, including any negative ones, for four reasons.

First, the Court held that the algorithmic testing results were not work product but “more in the nature of bare facts.”  Id. at 5-6.

Second, the Court determined that “even assuming arguendo” that the work-product privilege applied, the privilege was waived “by placing a large subset of these facts in the [complaint].”  Id. at 6.

Third, the Court reasoned that the negative testing results were relevant to the AI company’s defenses, notwithstanding the plaintiffs’ argument that the negative testing results were irrelevant to their claims.  Id. at 6.

Finally, the Court rejected the plaintiffs’ argument that the AI company can simply interrogate the algorithm itself.  As the Court explained, “without knowing the account settings used by Plaintiffs to generate their positive and negative results, and without knowing the exact formulation of the prompts used to generate Plaintiffs’ negative results, Defendants would be unable to replicate the same results.”  Id.

Implications For Companies

This case is a win for defendants in class actions based on alleged outputs of AI-based algorithms.  In such cases, the Tremblay decision can be cited as useful precedent for seeking discovery from recalcitrant plaintiffs of all of plaintiffs’ pre-suit prompts and outputs, and all related account settings.  The Court’s fourfold reasoning in Tremblay applies not only in gen AI cases but also in other AI cases.  For example, in website advertising technology (adtech) cases, plaintiffs should not be able to withhold their adtech settings (the account settings), their browsing histories and behaviors (the prompts), or documents relating to targeted advertising they allegedly received as a result, any related purchases, and alleged damages (the outputs).  As AI-related technologies continue their growth spurt, and litigation in this area spurts accordingly, the implications of Tremblay may reach far and wide.

Illinois Federal Court Rejects Class Action Because An AI-Powered Porn Filter Does Not Violate The BIPA

By Gerald L. Maatman, Jr., Justin R. Donoho, and Tyler Z. Zmick

Duane Morris Takeaways:  In a consequential ruling on June 13, 2024, Judge Sunil Harjani of the U.S. District Court for the Northern District of Illinois dismissed a class action brought under the Illinois Biometric Information Privacy Act (BIPA) in Martell v. X Corp., Case No. 23-CV-5449, 2024 WL 3011353 (N.D. Ill. June 13, 2024).  The ruling is significant as it shows that plaintiffs alleging that cutting-edge technologies violate the BIPA face significant hurdles to support the plausibility of their claims when the technology neither performs facial recognition nor records distinct facial measurements as part of any facial recognition process.

Background

This case is one of over 400 class actions filed in 2023 alleging that companies improperly obtained individuals’ biometric identifiers and biometric information in violation of the BIPA.

In Martell v. X Corp., Plaintiff alleged that he uploaded a photograph containing his face to the social media platform “X” (formerly known as Twitter), which X then analyzed for nudity and other inappropriate content using a product called “PhotoDNA.”  According to Plaintiff, PhotoDNA created a unique digital signature of his face-containing photograph known as a “hash” to compare against the hashes of other photographs, thus necessarily obtaining a “scan of … face geometry” in violation of the BIPA, 740 ILCS 14/10.

X Corp. moved to dismiss Plaintiff’s BIPA claim, arguing, among other things, that Plaintiff failed to allege that PhotoDNA obtained a scan of face geometry because (1) PhotoDNA did not perform facial recognition; and (2) the hash obtained by PhotoDNA could not be used to re-identify him.

The Court’s Opinion And Its Dual Significance

The Court granted X Corp.’s motion to dismiss based on both of these arguments.  First, the Court found no plausible allegations of a scan of face geometry because “PhotoDNA is not facial recognition software.”  Martell, 2024 WL 3011353, at *2 (N.D. Ill. June 13, 2024).  As the Court explained, “Plaintiff does not allege that the hash process takes a scan of face geometry, rather he summarily concludes that it must. The Court cannot accept such conclusions as facts adequate to state a plausible claim.”  Id. at *3.

In other cases in which plaintiffs have brought BIPA claims involving face-related technologies performing functions other than facial recognition, companies have received mixed rulings when challenging the plausibility of allegations that their technologies obtained facial data “biologically unique to the individual.”  740 ILCS 14/5(c).  Other BIPA defendants have been similarly successful at the pleading stage, for example, in securing dismissal of BIPA lawsuits involving virtual try-on technologies that allow customers to use their computers to visualize glasses, makeup, or other accessories on their faces.  See Clarke v. Aveda Corp., 2023 WL 9119927, at *2 (N.D. Ill. Dec. 1, 2023); Castelaz v. Estee Lauder Cos., Inc., 2024 WL 136872, at *7 (N.D. Ill. Jan. 10, 2024).  Defendants have been less successful at the pleading stage, however, and continue to litigate cases involving software verifying compliance with U.S. passport photo requirements, Daichendt v. CVS Pharmacy, Inc., 2023 WL 3559669, at *2 (N.D. Ill. May 4, 2023), and software detecting fever from the forehead and whether the patient is wearing a facemask, Trio v. Turing Video, Inc., 2022 WL 4466050, at *13 (N.D. Ill. Sept. 26, 2022).  Martell bolsters these mixed rulings in non-facial-recognition cases in favor of defendants, with its finding that mere allegations of verification that a face-containing picture is not pornographic are insufficient to establish that the defendant obtained any biometric identifier or biometric information.

Second, the Court found no plausible allegations of a scan of face geometry because “Plaintiff’s Complaint does not include factual allegations about the hashes including that it conducts a face geometry scan of individuals in the photo.”  Martell, 2024 WL 3011353, at *3.  Instead, the Court found, obtaining a scan of face geometry means “zero[ing] in on [a face’s] unique contours to create a ‘template’ that maps and records [the individual’s] distinct facial measurements.”  Id.

This holding is significant and has potential implications for BIPA suits based on AI-based, modern facial recognition systems in which the AI transforms photographs into numerical expressions that can be compared to determine their similarity, similar to the way X Corp.’s PhotoDNA transformed a photograph containing a face into a unique numerical hash.  Older, non-AI facial recognition systems in place at the time of the BIPA’s enactment in 2008, by contrast, attempt to identify individuals by using measurements of face geometry that identify distinguishing features of each subject’s face.  These older systems construct a facial graph from key landmarks such as the corners of the eyes, tip of the nose, corners of the mouth, and chin.  Does AI-based facial recognition — which does not “map[] and record[] … distinct facial measurements” (id. at *3) like these older systems — perform a scan of face geometry under the BIPA?  One court addressing this question, raised in opposing summary judgment briefs and opined on by opposing experts, held: “This is a quintessential dispute of fact for the jury to decide.”  In Re Facebook Biometric Info. Priv. Litig., 2018 WL 2197546, at *3 (N.D. Cal. May 14, 2018).  In short, whether AI-based facial recognition systems violate the BIPA remains “the subject of debate.”  “The Sedona Conference U.S. Biometric Systems Privacy Primer,” The Sedona Conference Journal, vol. 25, at 200 (May 2024).  The Court’s holding in Martell adds to this mosaic and suggests that plaintiffs challenging AI-based facial recognition systems under the BIPA will face significant hurdles to prove that the technology obtains a scan of face geometry.

Implications for Companies

The Court’s dismissal of conclusory allegations is a win for defendants whose cutting-edge technologies neither perform facial recognition nor record distinct facial measurements as part of any facial recognition process.  While litigation over the BIPA undoubtedly will continue, the Martell decision supplies useful precedent for companies facing BIPA lawsuits containing insufficient allegations that they have obtained a scan of facial geometry unique to an individual.

© 2009-2025 Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
