California Federal Court Denies Motion To Dismiss Artificial Intelligence Employment Discrimination Lawsuit

By Alex W. Karasik, Gerald L. Maatman, Jr. and George J. Schaller

Duane Morris Takeaways:  In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability. This litigation has been closely watched for its novel case theory based on artificial intelligence use in making personnel decisions. For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount.  This ruling illustrates that employment screening vendors who utilize AI software may be liable for discrimination claims as agents of employers.

This development follows Workday’s first successful Motion to Dismiss, which we blogged about here, and the EEOC’s amicus brief filing, which we blogged about here.

Case Background

Plaintiff is an African American male over the age of 40, with a bachelor’s degree in finance from Morehouse College, an all-male Historically Black College and University, and an honors graduate degree. Id. at 2. Plaintiff also alleges he suffered from anxiety and depression.  Since 2017, Plaintiff applied to over 100 jobs with companies that use Workday’s screening tools.  In many applications, Plaintiff alleges he was required to take a “Workday-branded assessment and/or personality test.”  Plaintiff asserts these assessments “likely . . . reveal mental health disorders or cognitive impairments,” so others who suffer from anxiety and depression are “likely to perform worse . . . and [are] screened out.”  Id. at 2-3.  Plaintiff was allegedly denied employment through Workday’s platform across all submitted applications.

Plaintiff alleges Workday’s algorithmic decision-making tools discriminate against job applicants who are African-American, over the age of 40, and/or disabled.  Id. at 3.  In support of these allegations, Plaintiff claims that in one instance, he applied for a position at 12:55 a.m. and his application was rejected less than an hour later.  Plaintiff brought claims under Title VII of the Civil Rights Act of 1964 (“Title VII”), the Civil Rights Act of 1866 (“Section 1981”), the Age Discrimination in Employment Act of 1967 (“ADEA”), and the ADA Amendments Act of 2008 (“ADA”), for intentional discrimination on the basis of race and age, and disparate impact discrimination on the basis of race, age, and disability. Plaintiff also brought a claim for aiding and abetting race, disability, and age discrimination against Workday under California’s Fair Employment and Housing Act (“FEHA”).  Workday moved to dismiss; Plaintiff’s opposition was supported by an amicus brief filed by the EEOC.

The Court’s Decision

The Court granted in part and denied in part Workday’s motion to dismiss.  At the outset of its opinion, the Court noted that Plaintiff alleged Workday was liable for employment discrimination, under Title VII, the ADEA, and the ADA, on three theories: (1) as an employment agency; (2) as an agent of employers; and (3) as an indirect employer. Id. at 5.

The Court opined that the relevant statutes prohibit discrimination “not just by employers but also by agents of those employers,” so an employer cannot “escape liability for discrimination by delegating [] traditional functions, like hiring, to a third party.”  Id.  Therefore, an employer’s agent can be independently liable when the employer has delegated to the agent “functions [that] are traditionally exercised by the employer.”  Id.

With regard to the “employment agency” theory, the Court reasoned employment agencies “procure employees for an employer” – meaning – “they find candidates for an employer’s position; they do not actually employ those employees.”  Id. at 7.  The Court further reasoned employment agencies are liable when they “fail or refuse to refer” individuals for consideration by employers on prohibited bases.  Id. The Court held Plaintiff did not sufficiently allege Workday finds employees for employers such that Workday is an employment agency.  Accordingly, the Court granted Workday’s motion to dismiss with respect to the anti-discrimination statutes based on an employment agency theory, without leave to amend.

In addition, the Court held that Workday may be liable on an agency theory, as Plaintiff plausibly alleged Workday’s customers delegated their traditional function of rejecting candidates or advancing them to the interview stage to Workday.  Id.  The Court determined if it reasoned otherwise, and accepted Workday’s arguments, then companies would “escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence).”  Id. at 8.  The Court determined Plaintiff’s allegations that Workday’s decision-making tools “make hiring decisions” as its software can “automatically disposition[] or move[] candidates forward in the recruiting process” were plausible.  Id. at 9.

The Court opined that given Workday’s allegedly “crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”  Id.  With regard to artificial intelligence, the Court noted “Workday’s role in the hiring process was no less significant because it allegedly happens through artificial intelligence,” and the Court declined to “draw[] an artificial distinction between software decision-makers and human decision-makers,” as any distinction would “gut anti-discrimination laws in the modern era.”  Id. at 10.

Accordingly, the Court denied Workday’s motion to dismiss Plaintiff’s federal discrimination claims.

Disparate Impact Claims

The Court next denied Workday’s motion to dismiss Plaintiff’s disparate impact discrimination claims, as Plaintiff adequately alleged all elements of a prima facie case of disparate impact.

First, Plaintiff’s amended complaint asserted that Workday’s use of algorithmic decision-making tools, including training data from personality tests, to screen applicants had a disparate impact on job-seekers in certain protected categories.  Second, the Court found a plausible disparity, recognizing that Plaintiff’s assertions were not typical.  “Unlike a typical employment discrimination case where the dispute centers on the plaintiff’s application to a single job, [Plaintiff] has applied to and been rejected from over 100 jobs for which he was allegedly qualified.”  Id. at 14.  The Court reasoned the “common denominator” for these positions was Workday and the platform Workday provided to companies for application intake and screening.  Id.

The Court held “[t]he zero percent success rate at passing Workday’s initial screening,” combined with Plaintiff’s allegations of bias in Workday’s training data and tools, plausibly supported an inference that Workday’s algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate’s race, age, or disability.  Id. at 15.  The Court therefore denied Workday’s motion to dismiss the disparate impact claims under Title VII, the ADEA, and the ADA.  Id. at 16.

Intentional Discrimination Claims

The Court granted Workday’s motion to dismiss Plaintiff’s claims that Workday intentionally discriminated against him based on race and age.  Id.  The Court found that Plaintiff sufficiently alleged he was qualified through his degrees, areas of expertise, and work experience.  However, the Court found Plaintiff’s allegation that Workday intended its screening tools to be discriminatory because “Workday [was] aware of the discriminatory effects of its applicant screening tools” was not enough to satisfy his pleading burden.  Id. at 18.  Accordingly, the Court granted Workday’s motion to dismiss Plaintiff’s intentional discrimination claims under Title VII, the ADEA, and § 1981, without leave to amend, while leaving the door open for Plaintiff to seek amendment if discriminatory intent is revealed in future discovery.  Id.   Finally, the Court granted Workday’s motion to dismiss Plaintiff’s claim under California’s Fair Employment and Housing Act with leave to amend.

Implications For Employers

The Court’s resolution of liability for software vendors that provide AI screening tools to employers centered on whether those tools were involved in “traditional employment decisions.”  Here, the Court held that Plaintiff sufficiently alleged that Workday was an agent of employers because it made employment decisions in the screening process through the use of artificial intelligence.

This decision likely will be used as a roadmap for the plaintiffs’ bar to bring discrimination claims against third-party vendors involved in the employment decision process, especially those using algorithmic software to make those decisions. Companies should also take heed, especially given the EEOC’s prior guidance suggesting that employers should audit their vendors’ use of artificial intelligence for discriminatory impact.

California Federal Court Refuses To Dismiss Wiretapping Class Action Involving Company’s Use Of Third-Party AI Software

By Gerald L. Maatman, Jr., Justin R. Donoho, and Nathan Norimoto

Duane Morris Takeaways:  On July 5, 2024, in Jones, et al. v. Peloton Interactive, Inc., No. 23-CV-1082, 2024 WL 3315989 (S.D. Cal. July 5, 2024), Judge M. James Lorenz of the U.S. District Court for the Southern District of California denied a motion to dismiss a class action complaint alleging that a company’s use of a third-party AI-powered chat feature embedded in the company’s website aided and abetted an interception in violation of the California Invasion of Privacy Act (CIPA).  Judge Lorenz was unpersuaded by the company’s arguments that the third party functioned as an extension of the company rather than as a third-party eavesdropper.  Instead, the Court found that the complaint had sufficient facts to plausibly allege that the third party used the chats to improve its own AI algorithm and thus was more akin to a third-party eavesdropper for which the company could be held liable for aiding and abetting wiretapping under the CIPA.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that third-party AI-powered software embedded in defendants’ websites or other processes and technologies captured plaintiffs’ information and sent it to the third party.  A common claim raised in these cases arises under federal or state wiretap acts and seeks hundreds of millions or billions of dollars in statutory damages.  No wiretap claim can succeed, however, where the plaintiff has consented to the embedded technology’s receipt of their communications.  See, e.g., Smith v. Facebook, Inc., 262 F. Supp. 3d 943, 955 (N.D. Cal. 2017) (dismissing CIPA claim involving embedded Meta Pixel technology because plaintiffs consented to alleged interceptions by Meta via their Facebook user agreements).

In Jones, Plaintiffs brought suit against an exercise equipment and media company.  According to Plaintiffs, the defendant company used third-party software embedded in its website’s chat feature.  Id. at *1.  Plaintiffs further alleged that the software routed the communications directly to the third party without Plaintiffs’ consent, thereby allowing the third party to use the content of the communications “to improve the technological function and capabilities of its proprietary, patented artificial intelligence software.”  Id. at **1, 4.

Based on these allegations, Plaintiffs alleged a claim for aiding and abetting an unlawful interception and use of the intercepted information under California’s wiretapping statute, CIPA § 631.  Id. at *2.  Although Plaintiffs did not allege any actual damages, see ECF No. 1, the statutory damages they sought totaled at least $1 billion.  See id. ¶ 33 (alleging hundreds of thousands of class members); Cal. Penal Code § 637.2 (setting forth statutory damages of $5,000 per violation).  The company moved to dismiss under Rule 12(b)(6), arguing that the “party exception” to CIPA applied because the third-party software “functions as an extension of [the company] rather than as a third-party eavesdropper.”  2024 WL 3315989, at *2.

The Court’s Opinion

The Court denied the company’s motion and allowed Plaintiffs’ CIPA claim to proceed to discovery.

The CIPA contains a “party exception,” meaning that a party to a communication cannot be held liable under the statute for listening to its own conversation.  Id. at *2.  To answer the question, for purposes of CIPA’s party exception, of whether the embedded chat software provider was more akin to a party or a third-party eavesdropper, the Court found that courts look to the “technical context of the case.”  Id. at *3.  As the Court explained, a software provider can be held liable as a third party under CIPA if that entity listens in on a consensual conversation where the entity “uses the collected data for its own commercial purposes.”  Id.  By contrast, the Court further explained, if the software provider merely collects, refines, and relays the information obtained on the company website back to the company “in aid of [defendant’s] business,” then it functions as a tool and not as a third party.  Id.

Guided by this framework, the Court found sufficient allegations that the software provider used the chats collected on the company’s website for its own purposes of improving its AI-driven algorithm.  Id. at *4.  Therefore, according to the Court, the complaint sufficiently alleged that the software provider was “more than a mere ‘extension’” of the company, such that CIPA’s party exception did not apply and Plaintiffs sufficiently stated a claim for the company’s aiding and abetting of the software provider’s wiretap violation.  Id.

Implications For Companies

The Court’s opinion serves as a cautionary tale for companies using third-party AI-powered processes and technologies that collect customer communications and information.  As the ruling shows, litigation risk associated with companies’ use of third-party AI-powered algorithms is not limited to complaints alleging damaging outcomes such as the discriminatory impact alleged in Louis v. Saferent Sols., LLC, 685 F. Supp. 3d 19, 41 (D. Mass. 2023) (denying motion to dismiss claim under Fair Housing Act against landlord in conjunction with landlord’s use of an algorithm to calculate the risk of leasing a property to a particular tenant).  In addition, companies face the risk of high-stakes claims for statutory damages under wiretap statutes associated with their use of third-party AI-powered algorithms embedded in their websites, even if the third party’s only use of the collected data is to improve its algorithm and even if no actual damages are alleged.

As AI-related technologies continue their rapid growth, and litigation in this area grows accordingly, organizations should consider in light of Jones whether to modify their website terms of use, data privacy policies, and other notices to their website visitors and customers to describe the organization’s use of AI in additional detail.  Doing so could deter, or help defend, a future AI class action lawsuit similar to the many being filed today alleging omission of such details, raising claims under various states’ wiretap acts and consumer fraud acts, and seeking multimillion-dollar and billion-dollar statutory damages.

California Federal Court Rejects AI Class Action Plaintiffs’ Cherry-Picking Of AI Algorithm Test Results And Orders Production Of All Results And Account Settings

By Gerald L. Maatman, Jr., Justin R. Donoho, and Brandon Spurlock

Duane Morris Takeaways:  On June 24, 2024, Magistrate Judge Robert Illman of the U.S. District Court for the Northern District of California ordered a group of authors alleging copyright infringement by a maker of generative artificial intelligence to produce information relating to pre-suit algorithmic testing in Tremblay v. OpenAI, Inc., No. 23-CV-3223 (N.D. Cal. June 13, 2024).  The ruling is significant because it shows that plaintiffs who file class action complaints alleging improper use of AI, relying on cherry-picked results from their testing of the AI-based algorithms at issue, cannot simultaneously withhold during discovery their negative testing results and the account settings used to produce any results.  The Court’s reasoning applies not only in gen AI cases, but also in other AI cases, such as website advertising technology cases.

Background

This case is one of over a dozen class actions filed in the last two years alleging that makers of generative AI technologies violated copyright laws by training their algorithms on copyrighted content, or that they violated wiretapping, data privacy, and other laws by training their algorithms on personal information.

It is also one of the hundreds of class actions filed in the last two years involving AI technologies used not only for gen AI but also for facial recognition or other facial analysis, website advertising, profiling, automated decision-making, educational operations, clinical medicine, and more.

In Tremblay v. OpenAI, plaintiffs (a group of authors) allege that an AI company trained its algorithm by “copying massive amounts of text” to enable it to “emit convincingly naturalistic text outputs in response to user prompts.”  Id. at 1.  Plaintiffs allege these outputs include summaries that are so accurate that the algorithm must retain knowledge of the ingested copyrighted works in order to output similar textual content.  Id. at 2.  An exhibit to the complaint displaying the algorithm’s prompts and outputs purports to support these allegations.  Id.

The AI company sought discovery of (a) the account settings; and (b) the algorithm’s prompts and outputs that “did not” include the plaintiffs’ “preferred, cherry-picked” results.  Id. (emphasis in original).  The plaintiffs refused, citing work-product privilege, which protects from discovery documents prepared in anticipation of litigation or for trial.  The AI company argued that the authors waived that protection by revealing their preferred prompts and outputs, and asked the court to order production of the negative prompts and outputs, too, and all related account settings.  Id. at 2-3.

The Court’s Decision

The Court agreed with the AI company and ordered production of the account settings and all of plaintiffs’ pre-suit algorithmic testing results, including any negative ones, for four reasons.

First, the Court held that the algorithmic testing results were not work product but “more in the nature of bare facts.”  Id. at 5-6.

Second, the Court determined that “even assuming arguendo” that the work-product privilege applied, the privilege was waived “by placing a large subset of these facts in the [complaint].”  Id. at 6.

Third, the Court reasoned that the negative testing results were relevant to the AI company’s defenses, notwithstanding the plaintiffs’ argument that the negative testing results were irrelevant to their claims.  Id. at 6.

Finally, the Court rejected the plaintiffs’ argument that the AI company can simply interrogate the algorithm itself.  As the Court explained, “without knowing the account settings used by Plaintiffs to generate their positive and negative results, and without knowing the exact formulation of the prompts used to generate Plaintiffs’ negative results, Defendants would be unable to replicate the same results.”  Id.

Implications For Companies

This case is a win for defendants in class actions based on alleged outputs of AI-based algorithms.  In such cases, the Tremblay decision can be cited as useful precedent for seeking discovery from recalcitrant plaintiffs of all of plaintiffs’ pre-suit prompts and outputs, and all related account settings.  The Court’s fourfold reasoning in Tremblay applies not only in gen AI cases but also in other AI cases.  For example, in website advertising technology (adtech) cases, plaintiffs should not be able to withhold their adtech settings (the account settings), their browsing histories and behaviors (the prompts), and all documents relating to targeted advertising they allegedly received as a result, any related purchases, and alleged damages (the outputs).  As AI-related technologies continue their rapid growth, and litigation in this area grows accordingly, the implications of Tremblay may reach far and wide.

Illinois Federal Court Rejects Class Action Because An AI-Powered Porn Filter Does Not Violate The BIPA

By Gerald L. Maatman, Jr., Justin R. Donoho, and Tyler Z. Zmick

Duane Morris Takeaways:  In a consequential ruling on June 13, 2024, Judge Sunil Harjani of the U.S. District Court for the Northern District of Illinois dismissed a class action brought under the Illinois Biometric Information Privacy Act (BIPA) in Martell v. X Corp., Case No. 23-CV-5449, 2024 WL 3011353 (N.D. Ill. June 13, 2024).  The ruling is significant as it shows that plaintiffs alleging that cutting-edge technologies violate the BIPA face significant hurdles to support the plausibility of their claims when the technology neither performs facial recognition nor records distinct facial measurements as part of any facial recognition process.

Background

This case is one of over 400 class actions filed in 2023 alleging that companies improperly obtained individuals’ biometric identifiers and biometric information in violation of the BIPA.

In Martell v. X Corp., Plaintiff alleged that he uploaded a photograph containing his face to the social media platform “X” (formerly known as Twitter), which X then analyzed for nudity and other inappropriate content using a product called “PhotoDNA.”  According to Plaintiff, PhotoDNA created a unique digital signature of his face-containing photograph known as a “hash” to compare against the hashes of other photographs, thus necessarily obtaining a “scan of … face geometry” in violation of the BIPA, 740 ILCS 14/10.
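To make the hashing concept concrete, below is a minimal sketch in Python.  It is illustrative only: PhotoDNA’s actual algorithm is proprietary, so this uses a simple “average hash” to show how a whole-image signature can flag near-duplicate photographs without measuring eyes, nose, or any other facial feature.

```python
# Illustrative sketch of perceptual hashing; this is NOT PhotoDNA, whose
# algorithm is proprietary. It shows how an image can be reduced to a
# compact signature for duplicate detection without any scan of facial
# geometry. File names are hypothetical.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit signature based on overall brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > avg)  # 1 if pixel is brighter than average
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(h1 ^ h2).count("1")

# Compare an uploaded photo against a known flagged image.
if hamming_distance(average_hash("upload.jpg"), average_hash("flagged.jpg")) <= 5:
    print("possible near-duplicate of a flagged image")
```

Nothing in a process like this records the distance between the eyes, the shape of the chin, or any other facial measurement, which is the distinction the Court found dispositive.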

X Corp. moved to dismiss Plaintiff’s BIPA claim, arguing, among other things, that Plaintiff failed to allege that PhotoDNA obtained a scan of face geometry because (1) PhotoDNA did not perform facial recognition; and (2) the hash obtained by PhotoDNA could not be used to re-identify him.

The Court’s Opinion And Its Dual Significance

The Court granted X Corp.’s motion to dismiss based on both of these arguments.  First, the Court found no plausible allegations of a scan of face geometry because “PhotoDNA is not facial recognition software.”  Martell, 2024 WL 3011353, at *2 (N.D. Ill. June 13, 2024).  As the Court explained, “Plaintiff does not allege that the hash process takes a scan of face geometry, rather he summarily concludes that it must. The Court cannot accept such conclusions as facts adequate to state a plausible claim.”  Id. at *3.

In other cases in which plaintiffs have brought BIPA claims involving face-related technologies performing functions other than facial recognition, companies have received mixed rulings when challenging the plausibility of allegations that their technologies obtained facial data “biologically unique to the individual.”  740 ILCS 14/5(c).  Like X Corp., some BIPA defendants have succeeded at the pleading stage, for example, in securing dismissal of BIPA lawsuits involving virtual try-on technologies that allow customers to use their computers to visualize glasses, makeup, or other accessories on their faces.  See Clarke v. Aveda Corp., 2023 WL 9119927, at *2 (N.D. Ill. Dec. 1, 2023); Castelaz v. Estee Lauder Cos., Inc., 2024 WL 136872, at *7 (N.D. Ill. Jan. 10, 2024).  Defendants have been less successful at the pleading stage, however, and continue to litigate, in cases involving software verifying compliance with U.S. passport photo requirements, Daichendt v. CVS Pharmacy, Inc., 2023 WL 3559669, at *2 (N.D. Ill. May 4, 2023), and software detecting fever from the forehead and whether the patient is wearing a facemask, Trio v. Turing Video, Inc., 2022 WL 4466050, at *13 (N.D. Ill. Sept. 26, 2022).  Martell bolsters these mixed rulings in non-facial recognition cases in favor of defendants, with its finding that mere allegations of verification that a face-containing picture is not pornographic are insufficient to establish that the defendant obtained any biometric identifier or biometric information.

Second, the Court found no plausible allegations of a scan of face geometry because “Plaintiff’s Complaint does not include factual allegations about the hashes including that it conducts a face geometry scan of individuals in the photo.”  Martell, 2024 WL 3011353, at *3.  Instead, the Court found, obtaining a scan of face geometry means “zero[ing] in on [a face’s] unique contours to create a ‘template’ that maps and records [the individual’s] distinct facial measurements.”  Id.

This holding is significant and has potential implications for BIPA suits based on AI-based, modern facial recognition systems in which the AI transforms photographs into numerical expressions that can be compared to determine their similarity, similar to the way X Corp.’s PhotoDNA transformed a photograph containing a face into a unique numerical hash.  Older, non-AI facial recognition systems in place at the time of the BIPA’s enactment in 2008, by contrast, attempt to identify individuals by using measurements of face geometry that identify distinguishing features of each subject’s face.  These older systems construct a facial graph from key landmarks such as the corners of the eyes, tip of the nose, corners of the mouth, and chin.  Does AI-based facial recognition — which does not “map[] and record[] … distinct facial measurements” (id. at *3) like these older systems — perform a scan of face geometry under the BIPA?  One court addressing this question, raised in opposing summary judgment briefs and opined on by opposing experts, held: “This is a quintessential dispute of fact for the jury to decide.”  In Re Facebook Biometric Info. Priv. Litig., 2018 WL 2197546, at *3 (N.D. Cal. May 14, 2018).  In short, whether AI-based facial recognition systems violate the BIPA remains “the subject of debate.”  “The Sedona Conference U.S. Biometric Systems Privacy Primer,” The Sedona Conference Journal, vol. 25, at 200 (May 2024).  The Court’s holding in Martell adds to this mosaic and suggests that plaintiffs challenging AI-based facial recognition systems under the BIPA will face significant hurdles in proving that the technology obtains a scan of face geometry.
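For readers weighing that question, the following Python sketch illustrates the comparison step in a modern embedding-based system.  The embedding vectors here are stand-ins (random numbers rather than the output of a trained neural network), and the threshold is hypothetical; the point is that similarity is computed between opaque numeric vectors, with no explicit facial graph of landmark measurements.

```python
# Hedged sketch of embedding comparison in modern facial recognition.
# In a real system, a trained neural network maps each photograph to a
# vector; random vectors stand in for that output here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings; values near 1.0 suggest the same face."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_photo_a = rng.normal(size=512)   # stand-in for model(photo_a)
emb_photo_b = rng.normal(size=512)   # stand-in for model(photo_b)

score = cosine_similarity(emb_photo_a, emb_photo_b)
print(f"similarity: {score:.3f}")    # compared against a hypothetical match threshold

# An older, landmark-based system would instead measure explicit geometry,
# e.g., interocular distance or nose-to-chin length, and compare those numbers.
```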

Implications for Companies

The Court’s dismissal of conclusory allegations is a win for defendants whose cutting-edge technologies neither perform facial recognition nor record distinct facial measurements as part of any facial recognition process.  While litigation over the BIPA will undoubtedly continue, the Martell decision supplies useful precedent for companies facing BIPA lawsuits containing insufficient allegations that they obtained a scan of facial geometry unique to an individual.

District Court Dismisses Data Privacy Class Action Against Health Care System For Failure To Sufficiently Allege Disclosure of PHI

By Gerald L. Maatman, Jr., Jennifer A. Riley, Justin Donoho, and Ryan T. Garippo

Duane Morris Takeaways:  On June 10, 2024, in Smart, et al. v. Main Line Health, Inc., No. 22-CV-5239, 2024 WL 2943760 (E.D. Pa. June 10, 2024), Judge Kai Scott of the U.S. District Court for the Eastern District of Pennsylvania dismissed in its entirety a class action complaint alleging that a nonprofit health system’s use of website advertising technology disclosed the plaintiff’s protected health information (“PHI”) in violation of the federal wiretap act and constituted the common-law torts of negligence and invasion of privacy.  The ruling is significant because it shows that such claims cannot surmount Rule 12(b)(6)’s plausibility standard without specifying the PHI allegedly disclosed.

Background

This case is one of the hundreds of class actions that plaintiffs have filed nationwide alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies.  This software, often called website advertising technologies or “adtech,” is a common feature on many websites in operation today; millions of companies and governmental organizations utilize it.  (See, e.g., Customer Data Platform Institute, “Trackers and Pixels Feeding Data Broker Stores” (reporting that “47% of websites using Meta Pixel, including 55% of S&P 500, 58% of retail, 42% of financial, and 33% of healthcare”); BuiltWith, “Facebook Pixel Usage Statistics” (offering access to data on over 14 million websites using the Meta Pixel and stating “[w]e know of 5,861,028 live websites using Facebook Pixel and an additional 8,181,093 sites that used Facebook Pixel historically and 2,543,263 websites in the United States”).)

In these lawsuits, plaintiffs generally allege that the defendant organization’s use of adtech violated federal and state wiretap statutes, consumer fraud statutes, and other laws, and they often seek hundreds of millions of dollars in statutory damages.  Plaintiffs have focused the bulk of their efforts to date on healthcare providers, but they have filed suits that span nearly every industry including retailers, consumer products, and universities.

In Smart, 2024 WL 2943760, at *1, Plaintiff brought suit against Main Line Health, Inc. (“Main Line”), “a non-profit health system.”  According to Plaintiff, Main Line installed the Meta Pixel on its public-facing website – not on its secure patient portal, id. at *1 n.2 – and thereby transmitted web-browsing information entered by users on the public-facing website such as:

“characteristics of individual patients’ communications with the [Main Line] website (i.e., their IP addresses, Facebook ID, cookie identifiers, device identifiers and account numbers) and the content of these communications (i.e., the buttons, links, pages, and tabs they click and view).”

Id. (quotations omitted).
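As a rough illustration of the mechanics at issue, the sketch below shows in general terms how embedded adtech can bundle identifiers with page-interaction data into a request to a third-party server.  It does not reproduce the Meta Pixel’s actual request format; every field name, identifier, and URL here is hypothetical.

```python
# Hypothetical sketch of an adtech beacon request; NOT the Meta Pixel's
# actual format. When a visitor clicks a button or link, embedded script
# can assemble identifiers and interaction data into a request like this.
from urllib.parse import urlencode

event = {
    "pixel_id": "1234567890",      # hypothetical site-specific pixel ID
    "event": "ButtonClick",
    "page": "/find-a-doctor",      # the page the visitor interacted with
    "cookie_id": "fb.1.abc123",    # hypothetical browser cookie identifier
    "ip": "203.0.113.7",           # documentation-range IP address
}

# The visitor's browser would transmit something like this to the ad platform:
beacon_url = "https://ads.example.com/tr?" + urlencode(event)
print(beacon_url)
```

The legal significance in Smart turned not on this mechanism itself but on whether the transmitted fields revealed PHI, which is why the specific pages clicked mattered to the Court.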

Based on these allegations, Plaintiff alleged claims for violation of the Electronic Communications Privacy Act (ECPA), negligence, and invasion of privacy.  Main Line moved to dismiss under Rule 12(b)(6) for failure to state sufficient facts that, if accepted as true, would state a claim for relief that is plausible on its face.

The Court’s Opinion

The Court agreed with Main Line and dismissed all three of Plaintiff’s claims.

To state a claim for violation of the ECPA, also known as the federal wiretap act, a plaintiff must show an intentional interception of the contents of an electronic communication using a device.  Main Line, 2024 WL 2943760, at *3.  The ECPA is a one-party consent statute, meaning that there is no liability under the statute for any party to the communication “unless such communication is intercepted for the purposes of committing a criminal or tortious act in violation of the Constitution or laws of the United States or any State.”  Id. (quoting 18 U.S.C. § 2511(2)(d)).

Plaintiff argued that he plausibly alleged Main Line’s criminal or tortious purpose because, under the Health Insurance Portability and Accountability Act (“HIPAA”), it is a federal crime for a health care provider to knowingly disclose PHI to another person.  The district court rejected this argument, finding Plaintiff failed to allege sufficient facts to support an inference that Main Line disclosed his PHI.  As the district court explained: “Plaintiff has not alleged which specific web pages he clicked on for his medical condition or his history of treatment with Main Line Health.”  Id. at *3 (collecting cases).

In short, the district court concluded that Plaintiff’s failure to sufficiently allege PHI was reason alone for the Court to dismiss Plaintiff’s ECPA claim.  Thus, the district court did not need to address other reasons that may have required dismissal of Plaintiff’s ECPA claims, such as (1) lack of criminal or tortious intent even if PHI had been sufficiently alleged, see, e.g., Katz-Lacabe v. Oracle Am., Inc., 668 F. Supp. 3d 928, 945 (N.D. Cal. 2023) (dismissing wiretap claim because defendant’s “purpose has plainly not been to perpetuate torts on millions of Internet users, but to make money”); Nienaber v. Overlake Hosp. Med. Ctr., 2024 WL 2133709, at *15 (W.D. Wash. May 13, 2024) (dismissing wiretap claim because “Plaintiff fails to plead a tortious or criminal use of the acquired communications, separate from the recording, interception, or transmission”); and (2) lack of any interception, see, e.g., Allen v. Novant Health, Inc., 2023 WL 5486240, at *4 (M.D.N.C. Aug. 24, 2023) (dismissing wiretap claim because an intended recipient cannot “intercept”); Glob. Pol’y Partners, LLC v. Yessin, 686 F. Supp. 2d 631, 638 (E.D. Va. 2009) (dismissing wiretap claim because the communication was sent as a different communication, not “intercepted”).

On Plaintiff’s remaining claims, the district court held that lack of sufficiently pled PHI defeated the causation element of Plaintiff’s negligence claim and defeated the element of Plaintiff’s invasion of privacy claim that any intrusion must have been “highly offensive to a reasonable person.”  Main Line, 2024 WL 2943760, at *4.

Implications For Companies

The holding of Main Line is a win for adtech class action defendants and should be instructive for courts around the country.  Other courts already have described the statutory damages imposed by the ECPA as “draconian.”  See, e.g., DIRECTV, Inc. v. Beecher, 296 F. Supp. 2d 937, 943 (S.D. Ind. 2003).  Main Line shows that, for adtech plaintiffs to plausibly plead claims for ECPA violations, negligence, or invasion of privacy, they at least need to identify what allegedly private information was disclosed via the adtech, in addition to surmounting additional hurdles under the ECPA such as plausibly pleading criminal or tortious intent and an interception.

Four Best Practices For Deterring Cybersecurity And Data Privacy Class Actions And Mass Arbitrations

By Justin Donoho

Duane Morris Takeaway: Class action lawsuits and mass arbitrations alleging cybersecurity incidents and data privacy violations are rising exponentially.  Corporate counsel seeking to deter such litigation and arbitration demands from being filed against their companies should keep in mind the following four best practices: (1) add or update arbitration clauses to mitigate the risks of mass arbitration; (2) use cybersecurity best practices, including continuously improving and prioritizing compliance activities; (3) audit and adjust uses of website advertising technologies; and (4) update website terms of use, data privacy policies, and vendor agreements.

Best Practices

  1. Add or update arbitration agreements to mitigate the risks of mass arbitration

Many organizations have long been familiar with the strategy of deterring class and collective actions by presenting arbitration clauses containing class and collective action waivers prominently for web users, consumers, and employees to accept via click wrap, browse wrap, login wrap, shrink wrap, and signatures.  Such agreements would require all allegedly injured parties to file individual arbitrations in lieu of any class or collective action.  Moreover, the strategy goes, filing hundreds, thousands, or more individual arbitrations would be cost-prohibitive for so many putative plaintiffs and thus deter them from taking any action against the organization in most cases.

Over the last decade, this strategy of deterrence was effective.[1]  Times have changed.  Now enterprising plaintiffs’ attorneys with burgeoning war chests, litigation funders, and high-dollar novel claims for statutory damages are increasingly using mass arbitration to pressure organizations into agreeing to multimillion-dollar settlements, just to avoid the arbitration costs.  In mass arbitrations filed with the American Arbitration Association (AAA) or Judicial Arbitration and Mediation Services (JAMS), for example, fees can total millions of dollars just to defend 500 individual arbitrations.[2]  One study found upfront fees ranging into the tens of millions of dollars for some large mass arbitrations.[3]  Companies with old arbitration clauses have been caught off guard by mass arbitrations and have sought relief from courts to avoid defending them; several recent decisions rejected that relief and ordered the defendant to arbitrate and pay the required hefty mass arbitration fees.[4]
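The economics are easy to see with a back-of-the-envelope calculation.  The per-case figures below are hypothetical placeholders, not the actual AAA or JAMS schedules (which change over time); the point is how quickly per-case fees compound across a mass filing.

```python
# Hypothetical illustration of mass arbitration fee exposure. The per-case
# amounts are placeholders, not actual AAA or JAMS fee schedule figures.
cases = 500
filing_fee_per_case = 1_500        # hypothetical administrative/filing fee
arbitrator_comp_per_case = 2_500   # hypothetical arbitrator compensation

total_fees = cases * (filing_fee_per_case + arbitrator_comp_per_case)
print(f"estimated fees before any merits work: ${total_fees:,}")  # $2,000,000
```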

If your organization has an arbitration clause, then one of the first challenges for counsel defending a newly served class action these days is determining whether to move to compel arbitration.  Although doing so could defeat the class action, is it worth the risk of mass arbitration and the projected costs involved?  Sometimes not.

Increasingly organizations are mitigating this risk by including mechanisms in their arbitration clauses such as pre-dispute resolution clauses, mass arbitration waivers, bellwether procedures, arbitration case filing requirements, and more.  This area of the law is developing quickly.  One case to watch will be one of the first appellate cases to address the latest trend of mass arbitrations — Wallrich v. Samsung Electronics America, Inc., No. 23-2842 (7th Cir.) (argued February 15, 2024, at issue is whether the district court erred in ordering the BIPA defendant to pay over $4 million in mass arbitration fees).

  2. Use cybersecurity best practices, including continuously improving and prioritizing compliance activities

IT organizations have long been familiar with the maxim that they should continuously improve their cybersecurity measures and other IT services.  Continuous improvement is part of many IT industry guidelines, such as ISO 27000, COBIT, ITIL, the NIST Cybersecurity Framework (CSF) and Special Publication 800, and the U.S. Department of Energy’s Cybersecurity Capability Maturity Model (C2M2).  Continuous improvement is becoming increasingly necessary in cybersecurity, as organizations’ IT systems and cybercriminals’ tools multiply at an increased rate.  The volume of data breach class actions doubled three times from 2019 to 2023.

Continuous improvement of cybersecurity measures needs to accelerate accordingly.  As always, IT organizations need to prioritize.  Priorities typically include:

  • improving IT governance;
  • complying with industry guidelines such as ISO, COBIT, ITIL, NIST, and C2M2;
  • deploying multifactor authentication, network segmentation, and other multilayered security controls;
  • staying current with identifying, prioritizing, and patching security holes as new ones continuously arise;
  • designing and continuously improving a cybersecurity incident response plan;
  • routinely practicing handling ransomware incidents with tabletop exercises (may be covered by cyber-insurers); and
  • implementing and continuously improving security information and event management (SIEM) systems and processes.

Measures like these to continuously improve and prioritize: (a) will help prevent a cybersecurity incident from occurring in the first place; and (b) if one occurs, will help the victim organization defend against plaintiffs’ arguments that it failed to use reasonable cybersecurity measures.

  3. Audit and adjust uses of website advertising technologies

In 2023, plaintiffs filed over 250 class actions alleging that Meta Pixel, Google Analytics, and other similar software embedded in defendants’ websites secretly captured plaintiffs’ web browsing data and sent it to Meta, Google, and other online advertising agencies, respectively.  This software, often called website advertising technologies or “adtech” (and often referred to by plaintiffs as “tracking technologies”), is a common feature on many websites in operation today — millions of companies and governmental organizations have it.[5]  These lawsuits generally allege that the organization’s use of adtech violated federal and state wiretap statutes, consumer fraud statutes, and other laws, and they often seek hundreds of millions of dollars in statutory damages.  The businesses targeted in these cases so far have been mostly healthcare providers, but they span nearly every industry, including retailers, consumer products, and universities.

Several of these cases have resulted in multimillion-dollar settlements, several have been dismissed, and the vast majority remain undecided.  The legal landscape has only begun to develop under plaintiffs’ many theories of liability, statutes, and common-law doctrines.  The adtech alleged has included not only Meta Pixel and Google Analytics but also dozens of the hundreds or thousands of other types of adtech.  All this legal uncertainty, multiplied by requested statutory damages, equals serious business risk to any organization with adtech on its public-facing website(s).

An organization may not know that adtech is present on its public-facing websites.  It could have been installed on a website by a vendor without proper authorization, for example, or as a default, without any human intent, by some web publishing tools.

Organizations should consider having an audit performed, before any litigation arises, of which adtech is or has been installed on which web pages, when, and which data types were transmitted as a result.  Multiple experts specialize in such adtech audits and can serve as expert witnesses should litigation arise.  An adtech audit is relatively quick and inexpensive, and it might be cost-beneficial for an organization to perform one before litigation arises because: (a) it might convince the organization to turn off some of its unneeded adtech now, thereby cutting off any potential damages relating to that adtech in a future lawsuit; (b) in the event of a future lawsuit, the audit would not be wasted — it is one of the first things adtech defendants typically perform upon being served with an adtech lawsuit; and (c) an adtech audit could assist in updating and modernizing website terms of use, data privacy policies, and vendor agreements (the next topic).

  4. Update and modernize website terms of use, data privacy policies, and vendor agreements

Organizations should consider whether to modify their website terms of use and data privacy policies to describe the organization’s use of adtech in additional detail.  Doing so could deter, or help defend, a future adtech class action lawsuit similar to the many being filed today alleging omission of such details, raising claims under various states’ consumer fraud acts, and seeking multimillion-dollar statutory damages.

Organizations should consider adding to contracts with website vendors and marketing vendors clauses that prohibit the vendor from incorporating any unwanted adtech into the organization’s public-facing websites.  That could help disprove the element of intent at issue in many claims brought under the recent explosion of adtech lawsuits.

Implications For Corporations: Implementation of these best practices is critical to mitigating risk and saving litigation dollars.  Click to learn more about the services Duane Morris provides in the practice areas of Class Action Litigation; Arbitration, Mediation, and Alternative Dispute Resolution; Cybersecurity; Privacy and Data Protection; Healthcare Information Technology; and Privacy and Security for Healthcare Providers.


[1] In 2015, for example, a large study found that of 33 banks that had engaged in practices relating to debit card overdrafts, 18 endured class actions and ended up paying out $1 billion to 29 million customers, whereas 15 had arbitration clauses and did not endure any class actions.  See Consumer Financial Protection Bureau (CFPB), Arbitration Study: Report to Congress, Pursuant to Dodd-Frank Wall Street Reform and Consumer Protection Act § 1028(a) at Section 8, available at https://files.consumerfinance.gov/f/201503_cfpb_arbitration-study-report-to-congress-2015.pdf.  These 15 with arbitration clauses paid almost nothing—less than 30 debit card customers per year in the entire nation filed any sort of arbitration dispute regarding their cards during the relevant timeframe.  See id. at Section 5, Table 1.  Another study of AT&T from 2003-2014 found similarly, concluding, “Although hundreds of millions of consumers and employees are obliged to use arbitration as their remedy, almost none do.”  Judith Resnik, Diffusing Disputes: The Public in the Private of Arbitration, the Private in Courts, and the Erasure of Rights, 124 Yale L.J. 2804 (2015).

[2] AAA, Consumer Mass Arbitration and Mediation Fee Schedule (amended and effective Jan. 15, 2024), available at https://www.adr.org/sites/default/files/Consumer_Mass_Arbitration_and_Mediation_Fee_Schedule.pdf; JAMS, Arbitration Schedule of Fees and Costs, available at https://www.jamsadr.com/arbitration-fees.

[3] J. Maria Glover, Mass Arbitration, 74 Stan. L. Rev. 1283, 1387 & Table 2 (2022).

[4] See, e.g., BuzzFeed Media Enters., Inc. v. Anderson, 2024 WL 2187054, at *1 (Del. Ch. May 15, 2024) (dismissing action to enjoin mass arbitration of claims brought by employees); Hoeg v. Samsung Elecs. Am., Inc., No. 23-CV-1951 (N.D. Ill. Feb. 2024) (ordering the defendant in BIPA claims brought by consumers to pay over $300,000 in AAA filing fees); Wallrich v. Samsung Elecs. Am., Inc., 2023 WL 5935024 (N.D. Ill. Sept. 12, 2023) (ordering the defendant in BIPA claims brought by consumers to pay over $4 million in AAA fees); Uber Tech., Inc. v. AAA, 204 A.D.3d 506, 510 (N.Y. App. Div. 2022) (ordering the defendant in reverse discrimination claims brought by customers to pay over $10 million in AAA case management fees).

[5] See, e.g., Customer Data Platform Institute, “Trackers and pixels feeding data broker stores,” reporting “47% of websites using Meta Pixel, including 55% of S&P 500, 58% of retail, 42% of financial, and 33% of healthcare” (available at https://www.cdpinstitute.org/news/trackers-and-pixels-feeding-data-broker-data-stores/); BuiltWith, “Facebook Pixel Usage Statistics,” offering access to data on over 14 million websites using the Meta Pixel, stating, “We know of 5,861,028 live websites using Facebook Pixel and an additional 8,181,093 sites that used Facebook Pixel historically and 2,543,263 websites in the United States” (available at https://trends.builtwith.com/analytics/Facebook-Pixel).

Webinar Replay: Privacy Class Action Litigation Trends

Duane Morris Takeaways: The significant stakes and evolving legal landscape in privacy class action rulings and legislation make the defense of privacy class actions a challenge for corporations. As a new wave of wiretapping lawsuits targets companies that use technologies to track user activity on their websites, there is significant state legislative activity regarding data privacy across the country. In the latest edition of the Data Privacy and Security Landscape webinar series, Duane Morris partners Jerry Maatman, Jennifer Riley, and Colin Knisely provide an in-depth look at the most active area of the plaintiffs’ class action bar over the past year.

The Duane Morris Class Action Defense Group recently published its desk references on privacy and data breach class action litigation, which can be viewed on any device and are fully searchable with selectable text. Bookmark or download the e-books here: Data Breach Class Action Review – 2024 and Privacy Class Action Review – 2024.

SB 2979 – Illinois Biometric Privacy Act Legislation Passes The Illinois Senate

By Gerald L. Maatman, Jr., Alex W. Karasik, and George J. Schaller

Duane Morris Takeaways: On April 11, 2024, the Illinois Senate passed Senate Bill 2979 (the “Bill”) by a vote of 46 to 13. The Bill would amend the Biometric Information Privacy Act (“BIPA”) to limit claim accrual to one violation of the BIPA, in stark contrast to the statute’s current “per-collection” basis. The Bill’s proposed revisions are accessible here and the status of the Bill can be tracked here. For any companies involved in privacy class action litigation, the proposed legislation is exceedingly important.

Background On The BIPA

The BIPA currently provides for “a violation for every scan,” based on the Illinois Supreme Court’s decision in Cothron v. White Castle Sys., 2023 IL 128004 (Feb. 17, 2023).  In Cothron, the Illinois Supreme Court held that “the plain language of §§ 15(b) and 15(d) shows that a claim accrues under the Act with every scan or transmission of biometric identifiers or biometric information without prior informed consent.” Id. at ¶ 45.

The majority of the Illinois Supreme Court opined that any policy-based concerns “about potentially excessive damage awards under the Act are best addressed by the legislature.” Id. at ¶ 43.

On January 31, 2024, Senator Bill Cunningham introduced SB 2979 to the Illinois Senate.

The Proposed Revisions To The BIPA Under SB 2979

The Bill’s proposed revisions articulate two key amendments regarding: (1) the “every scan” violation under §§ 15(b) and 15(d); and (2) an additional definition for “electronic signature” that augments the BIPA’s current “Written release” definition.

For violations under §§ 15(b) and 15(d), the Bill endeavors to limit alleged violations of the BIPA to a “single violation” for these respective sections.

The Bill narrows an aggrieved person’s entitled recovery to “at most, one recovery under this section,” provided that the biometric identifier or biometric information was obtained from the same person using the same method of collection.  See SB 2979, 740 ILCS 14/20(b).  Similar single-violation language is proposed under subsection (d) of § 15, the BIPA’s dissemination provision.  See SB 2979, 740 ILCS 14/20(c).
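A worked example shows the practical stakes.  The scan counts below are hypothetical, and the figure uses BIPA’s $1,000 liquidated damages amount for a negligent violation (740 ILCS 14/20) together with the five-year limitations period recognized in Tims v. Black Horse Carriers.

```python
# Hypothetical illustration of per-scan accrual (Cothron) versus the Bill's
# single-violation limit. Scan counts are assumed; $1,000 is BIPA's
# liquidated damages figure for a negligent violation (740 ILCS 14/20).
scans_per_day = 4        # assumed: clock-in/out plus meal breaks
workdays_per_year = 200  # assumed
years = 5                # five-year limitations period under Tims

per_scan_exposure = 1_000 * scans_per_day * workdays_per_year * years
single_violation_exposure = 1_000

print(f"per-scan accrual: ${per_scan_exposure:,} per employee")          # $4,000,000
print(f"single violation under SB 2979: ${single_violation_exposure:,}")
```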

Also included in the Bill is a new definition of “electronic signature”: “an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.”  See SB 2979, 740 ILCS 14/10.  This definition is then incorporated into the BIPA’s existing “Written release” definition.  See id.

As of April 25, 2024, the Bill has advanced to the Illinois General Assembly’s House of Representatives and is assigned to the Judiciary – Civil Committee.

Implications For Employers

Employers should monitor SB 2979 closely as it progresses through the Illinois House of Representatives.  The potentially unfettered damages from BIPA claims may be limited to a single violation if the Bill passes.  This would be a major and much-needed legislative coup for businesses with operations in Illinois that utilize biometric technology.

Pennsylvania Federal Court Dismisses Data Privacy Class Action Based On Lack Of Standing

By Gerald L. Maatman, Jr., Jesse S. Stavis, and Ryan T. Garippo

Duane Morris Takeaways: On April 5, 2024, Judge Marilyn J. Horan of the U.S. District Court for the Western District of Pennsylvania granted defendant Spirit Airlines’ motion to dismiss in Smidga, et al. v. Spirit Airlines, Inc., No. 2:22-CV-1578 (W.D. Pa. Apr. 5, 2024). Plaintiffs alleged that Spirit had invaded their privacy and violated state wiretapping laws by recording data regarding visits to Spirit’s website, but the Court held that they failed to plead a concrete injury sufficient to establish Article III standing. The ruling should serve as a reminder of the importance of considering challenges to standing, particularly in data privacy class actions where alleged injuries are often abstract and speculative.

Case Background

Like many companies, Spirit Airlines uses session replay code to track users’ activity on its website in order to optimize user experience. Session replay code allows a website operator to track mouse movements, clicks, text entries, and other data concerning a visitor’s activity on a website. According to Spirit, all data that is collected is thoroughly anonymized.
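To make the technology concrete, the sketch below shows the kind of event stream session replay code captures and one way a vendor might anonymize text entries before storage.  Real session replay products run as JavaScript in the browser and differ in detail; every field name and value here is hypothetical.

```python
# Hypothetical sketch of a session replay event stream and an anonymization
# step; real products differ and run as JavaScript in the browser.
import hashlib
import json

events = [
    {"t": 0.00, "type": "pageview",  "path": "/book-a-flight"},
    {"t": 1.42, "type": "mousemove", "x": 311, "y": 96},
    {"t": 2.05, "type": "click",     "target": "#departure-city"},
    {"t": 3.70, "type": "keystroke", "field": "email", "value": "user@example.com"},
]

def anonymize(event: dict) -> dict:
    """Replace free-text entries with a truncated one-way hash, as a vendor
    claiming 'thorough anonymization' might do before storing the session."""
    if event["type"] == "keystroke":
        digest = hashlib.sha256(event["value"].encode()).hexdigest()[:12]
        return {**event, "value": digest}
    return event

print(json.dumps([anonymize(e) for e in events], indent=2))
```

Whether a step like this defeats a concrete-injury theory is, as the Court’s analysis suggests, bound up with whether any personally identifying information survives the process.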

The plaintiffs in this putative class action alleged that Spirit violated numerous state wiretapping and invasion of privacy laws by recording their identities, travel plans, and contact information. One of the plaintiffs also alleged that she had entered credit card information into the website. All three plaintiffs claimed that the invasion of privacy had caused them mental anguish and suffering as well as lost economic value in their information.

Spirit moved to dismiss based on a lack of standing under Rule 12(b)(1) and failure to state a claim under Rule 12(b)(6).

The Court’s Ruling

The Court dismissed all claims without prejudice. It held that the plaintiffs had failed to establish standing. Under Article III of the U.S. Constitution, a plaintiff must establish that he or she has standing to sue in order to proceed with a lawsuit. The standing analysis asks whether: “(1) the plaintiff suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1547 (2016).

Spirit argued that the plaintiffs had failed to identify an injury in fact because they did not suffer any concrete injury from the recording of session data. The Court accepted this argument, noting that absent a concrete injury, a violation of a statute alone is insufficient to establish standing: “Congress [or a state legislature] may not simply enact an injury into existence, using its lawmaking power to transform something that is not remotely harmful into something that is.” Smidga, et al. v. Spirit Airlines, Inc., No. 2:22-CV-1578, 2024 WL 1485853, at *3 (W.D. Pa. Apr. 5, 2024) (internal citations and quotation marks omitted).

Judge Horan cited over fifteen recent cases where federal courts denied standing in similar circumstances to demonstrate that the mere recording of anonymized data does not satisfy the constitutional standing requirement. Further, the Court reasoned that a website’s “collection of basic contact information” is also insufficient. Id. at *4. However, the Court did note that recording credit card data without a user’s authorization might be sufficient to establish standing. Id. at *5. In Smidga, one plaintiff alleged that she had entered her credit card information, but Spirit insisted that no personally identifying information had been stored. Because plaintiffs bear the burden to prove standing, the Court found that the mere assertion that a plaintiff entered her credit card information into a website was — absent allegations that her personalized data was tied to that information — insufficient to confer Article III standing.

Having dismissed the case for lack of standing, the Court did not analyze Spirit’s arguments under Rule 12(b)(6) for failure to state a claim. The Court did, however, grant the plaintiffs leave to amend their complaint.

Implications For Companies

The success or failure of a class action often comes down to whether the putative class can achieve certification under Rule 23. Nonetheless, Rule 23 challenges are not the only weapon in a defendant’s arsenal. Indeed, a Rule 12(b)(1) challenge to standing is often an effective and efficient way to quickly dispose of a claim. This strategy is a particularly potent defense in the data privacy space, as the harms that are alleged in these cases are often abstract and speculative. The ruling in Smidga shows that even if a defendant allegedly violated a state privacy or wiretapping law, a plaintiff must still demonstrate that he or she has actually been harmed.

The Class Action Weekly Wire – Episode 46: 2024 Preview: Privacy Class Action Litigation


Duane Morris Takeaway:
This week’s episode of the Class Action Weekly Wire features Duane Morris partner Jennifer Riley, special counsel Brandon Spurlock, and associate Jeff Zohn with their discussion of 2023 developments and trends in privacy class action litigation as detailed in the recently published Duane Morris Privacy Class Action Review – 2024.

Check out today’s episode and subscribe to our show from your preferred podcast platform: Spotify, Amazon Music, Apple Podcasts, Google Podcasts, the Samsung Podcasts app, Podcast Index, Tune In, Listen Notes, iHeartRadio, Deezer, YouTube or our RSS feed.

Episode Transcript

Jennifer Riley: Welcome to our listeners, thank you for being here for our weekly podcast, the Class Action Weekly Wire. I’m Jennifer Riley, partner at Duane Morris, and joining me today is special counsel Brandon Spurlock and associate Jeffrey Zohn. Thank you for being on the podcast, guys.

Brandon Spurlock: Thank you, Jen, happy to be part of the podcast.

Jeff Zohn: Thanks, Jen, I am glad to be here.

Jennifer: Today on the podcast we are discussing the recent publication of this year’s edition of the Duane Morris Privacy Class Action Review. Listeners can find the eBook publication on our blog, the Duane Morris Class Action Defense Blog. Brandon, can you tell our listeners a little bit about our new publication?

Brandon: Yeah, sure, Jen. The last year saw a virtual explosion of privacy class action litigation. As a result, compliance with privacy laws in the myriad ways that companies interact with employees, customers, and third parties is a corporate imperative. To that end, the class action team at Duane Morris is pleased to present the Privacy Class Action Review – 2024. This publication analyzes the key privacy-related rulings and developments in 2023, and the significant legal decisions and trends impacting privacy class action litigation for 2024. We hope that companies and employers will benefit from this resource in their compliance with these evolving laws and standards.

Jennifer: In the rapidly evolving privacy litigation landscape, it is crucial for businesses to understand how courts are interpreting these often ambiguous privacy statutes. In 2023, courts across the country issued a mixed bag of results leading to major victories for both plaintiffs and defendants. Jeff, what were some of the takeaways from the publication with regard to litigation in this area in 2023?

Jeff: Yeah, you’re absolutely right that there was a mixed bag of results – both defendants and plaintiffs can point to major BIPA victories in 2023. This past year will definitely be remembered for some of the landmark pro-plaintiff rulings that will provide the plaintiffs’ bar with more than enough ammunition to keep BIPA litigation in the headlines for the foreseeable future. Specifically in 2023, the Illinois Supreme Court issued two seminal decisions that increase the opportunity for recovery of damages under BIPA, including Tims, et al. v. Black Horse Carriers, which held a five-year statute of limitations applies to claims under BIPA, and Cothron, et al. v. White Castle System, Inc., which held that a claim accrues under the BIPA each time a company collects or discloses biometric information.

Jennifer: Two major rulings indeed. Brandon, what do you anticipate these rulings will mean for privacy class actions in 2024?

Brandon: Sure, Jen. These rulings have far-reaching implications together. They have the potential to increase monetary damages in BIPA class actions in an exponential manner, especially in the employment context, where employees may scan in and out of work multiple times per day across more than 200 workdays per year. In 2023, in the wake of these rulings, class action filings more than doubled. We anticipate that the high volume of case filings will continue in 2024.

Jeff: I think it’s important to add that even though the BIPA is an Illinois state statute, various other states are continuing to consider proposed copycat statutes that follow the lead of Illinois. The federal government likewise continues to consider proposals for a national statute. These factors have transformed biometric privacy compliance into a top priority for businesses nationwide and have promoted privacy class actions to the top of the list of litigation risks facing business today. If other states succeed in enacting similar statutes, businesses can expect similar surges in those states as the filing numbers in Illinois continue their upward trend.

Jennifer: Thanks so much for that information – all very important for companies navigating the privacy class action regulations and statutes. The Review also talks about the top privacy settlements in 2023. How did plaintiffs do in securing settlement funds last year?

Brandon: Plaintiffs did very well in securing high-dollar settlements. In 2023, the top 10 privacy settlements totaled $1.32 billion. This was a significant increase over 2022, when the top 10 privacy class action settlements totaled almost $900 million, still a high number. Specific to BIPA litigation, the top 10 BIPA class action settlements totaled almost $150 million in 2023.


Jennifer: Thank you. We will continue to track those settlement numbers in 2024, as record-breaking settlement amounts have been a huge trend over the past two years. Thank you to Brandon and Jeff for being here today, and thank you to the loyal listeners for tuning in. Listeners, please stop by the blog for a free copy of the Privacy Class Action Review eBook.

Jeff: Thank you for having me, Jen, and thank you to all of our listeners.

Brandon: Thanks so much, everyone.
