New York Federal Court’s OpenAI Discovery Orders Provide Key Insights For Companies Navigating AI Preservation Standards

By Gerald L. Maatman, Jr., Justin Donoho, and Hayley Ryan

Duane Morris Takeaways: In a series of discovery rulings in In Re OpenAI, Inc. Copyright Infringement Litigation, No. 23 Civ. 11195 (S.D.N.Y.), Magistrate Judge Ona T. Wang issued orders that signal how courts are likely to approach AI data, privacy, and discovery obligations. Judge Wang’s orders illustrate the growing tension between AI system transparency and data privacy compliance – and how courts are trying to balance them.

For companies that develop or use AI, these rulings highlight both the risk of expansive preservation demands and the opportunity to shape proportional, privacy-conscious discovery frameworks. Below is an overview of these decisions and the takeaways for in-house counsel, privacy officers, and litigation teams.

Background

In May 2025, the U.S. District Court for the Southern District of New York issued a preservation order in a copyright action challenging the use of The New York Times’ content to train large language models. The order required OpenAI to preserve and segregate certain output log data that would otherwise be deleted. Days later, the Court denied OpenAI’s motion to reconsider or narrow that directive. By October 2025, however, the Court approved a negotiated modification that terminated OpenAI’s ongoing preservation obligations while requiring continued retention of the already-segregated data.

The Court’s Core Rulings

  1. Forward-Looking Preservation Now, Arguments Later

On May 13, 2025, the Court entered an order requiring OpenAI to preserve and segregate output log data that would otherwise be deleted, including data subject to user deletion requests or statutory erasure rights. See id., ECF No. 551. The rationale: once litigation begins, even transient data can be critical to issues like bias and representativeness. The Court stressed that it was too early to weigh proportionality, so preservation would continue until a fuller record emerged.

  2. Reconsideration Denied, Preservation Continues

A few days later, when OpenAI sought reconsideration or modification of the preservation order, the Court denied the request without prejudice. Id., ECF No. 559. The Court noted that it was premature to decide proportionality and potential sampling bias until additional information was developed.

  3. A Negotiated “Sunset” and Privacy Carve-Outs

By October 2025, the parties agreed to wind down the broad preservation obligation. On October 9, 2025, the Court approved a stipulated modification that ended OpenAI’s ongoing preservation duty as of September 26, 2025, limited retention to already-segregated logs, excluded requests originating from the European Economic Area, Switzerland, and the United Kingdom for privacy compliance, and added targeted, domain-based preservation for select accounts listed in an appendix. Id., ECF No. 922.

This evolution — from blanket to targeted, time-limited preservation — shows courts’ willingness to adapt when parties document technical feasibility, privacy conflicts, and litigation need.

Implications For Companies

  1. Evidence vs. Privacy: Courts Expect You to Reconcile Both

These rulings show that courts will not accept “privacy law conflicts” as a stand-alone excuse to delete potentially relevant data. Instead, companies must show they can segregate, anonymize, or retain data while maintaining compliance. The OpenAI orders make clear: when evidence may be lost, segregation beats destruction.

  2. Proportionality Still Matters

Even as courts push for preservation, they remain attentive to proportionality. While early preservation orders may seem sweeping, judges are open to refining them once the factual record matures. Companies that track the cost, burden, and privacy impact of compliance will be best positioned to negotiate tailored limits.

  3. Preservation Is Not Forever

The October 2025 stipulation illustrates how to exit an indefinite obligation: offer targeted cohorts, geographic exclusions, and sunset provisions supported by a concrete record. Courts will listen if you bring data, not just arguments.

A Playbook for In-House Counsel

  1. Map Your AI Data Universe

Inventory all AI-related data exhaust: prompts, outputs, embeddings, telemetry, and retention settings. Identify controllers, processors, and jurisdictions.
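An inventory like this is easiest to keep current when it is captured as structured data rather than prose. The following is a minimal sketch, in Python, of what a single inventory record might look like; the field names are hypothetical and should be adapted to your own systems and privacy program.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIDataAsset:
    """One entry in an AI data-exhaust inventory (illustrative fields only)."""
    name: str                        # e.g., "chat output logs"
    data_types: List[str]            # prompts, outputs, embeddings, telemetry, ...
    controller: str                  # entity deciding the purposes and means of processing
    processors: List[str]            # vendors or internal systems handling the data
    jurisdictions: List[str]         # where data subjects or servers are located
    retention_days: int              # current automated retention/deletion setting
    deletion_triggers: List[str] = field(default_factory=list)  # user request, statute, TTL

# Hypothetical record for an output-log store
output_logs = AIDataAsset(
    name="chat output logs",
    data_types=["prompts", "outputs"],
    controller="ExampleCo",
    processors=["cloud-logging-vendor"],
    jurisdictions=["US", "EEA", "UK"],
    retention_days=30,
    deletion_triggers=["user deletion request", "statutory erasure", "30-day TTL"],
)
```

Even a simple record of this kind answers the first questions a court or opposing party will ask: what data exists, who controls it, where it sits, and how quickly it disappears by default.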

  2. Build “Pause” Controls

Design systems capable of segregating or pausing deletion by user, region, or product line. This technical agility is key when a preservation order issues.
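In practice, “pause” controls mean that automated deletion jobs consult a registry of active holds before purging anything. The sketch below is illustrative only; the function and field names are assumptions, not a description of any particular system, and a real implementation would also need audit logging and privacy safeguards.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LegalHold:
    """A preservation directive scoped by user, region, or product line (None = not scoped)."""
    hold_id: str
    user_id: Optional[str] = None
    region: Optional[str] = None
    product_line: Optional[str] = None

def is_held(record: dict, holds: List[LegalHold]) -> bool:
    """Return True if any active hold matches the record on every populated scope field."""
    for hold in holds:
        if hold.user_id and hold.user_id != record["user_id"]:
            continue
        if hold.region and hold.region != record["region"]:
            continue
        if hold.product_line and hold.product_line != record["product_line"]:
            continue
        return True
    return False

def process_expired(records: List[dict], holds: List[LegalHold]):
    """Split expired records into those to segregate (under hold) and those safe to delete."""
    segregated, deletable = [], []
    for record in records:
        (segregated if is_held(record, holds) else deletable).append(record)
    return segregated, deletable
```

The design point worth noting is the default: anything matching a hold is routed to a segregated preservation store rather than deleted, mirroring the segregate-rather-than-destroy approach credited in the OpenAI orders.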

  3. Update Litigation Hold Templates for AI

Traditional holds miss ephemeral or system-generated data. Draft holds that instruct teams how to pause automated deletion while complying with privacy statutes.

  4. Propose Targeted Solutions

When facing broad discovery demands, offer alternatives: limit by time window, geography, or user cohort. Courts will accept reasonable, well-documented compromises.

  5. Build Toward an Off-Ramp

Preservation obligations can sunset — but only if supported by metrics. Track preserved volumes, costs, and privacy burdens to justify targeted, defensible limits.
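One lightweight way to build that record is to log preservation metrics on a regular cadence, so that burden can be shown over time rather than asserted after the fact. Below is a minimal sketch; the field names and figures are hypothetical.

```python
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class PreservationSnapshot:
    """Point-in-time metrics supporting a proportionality or sunset argument."""
    as_of: str                      # snapshot date (ISO format)
    records_preserved: int          # count of segregated records under hold
    storage_gb: float               # volume of preserved data
    monthly_cost_usd: float         # storage and engineering cost attributable to the hold
    erasure_requests_deferred: int  # privacy requests held open because of the hold

def append_snapshot(path: str, snap: PreservationSnapshot) -> None:
    """Append one snapshot row to a running CSV log, writing a header if the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(snap).keys()))
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(snap))

append_snapshot("preservation_metrics.csv", PreservationSnapshot(
    as_of=date.today().isoformat(),
    records_preserved=1_250_000,
    storage_gb=840.5,
    monthly_cost_usd=12_400.00,
    erasure_requests_deferred=310,
))
```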

Conclusion

The OpenAI orders reflect a new judicial mindset: preserve broadly first, negotiate smartly later. AI developers and data-driven businesses should expect similar directives in future litigation. Those that engineer for preservation flexibility, document privacy compliance, and proactively negotiate scope will avoid the steep costs of one-size-fits-all discovery — and may even help set the industry standard for balanced AI litigation governance.

Robo Boss Rejection: California Governor Newsom Pulls The Plug On AI Bill For Overly Broad Restrictions

By Alex. W. Karasik, Brian L. Johnsrud, and George J. Schaller

Duane Morris Takeaways:  On October 13, 2025, California Governor Gavin Newsom issued a written statement declining to sign Senate Bill 7 – called the “No Robo Bosses” Act (the “Act”).  While the Act aimed to restrict when and how employers could use automated decision-making systems and artificial intelligence, Governor Newsom rejected the proposed legislation on the grounds of the Act’s broad drafting and unfocused notification requirements.  Governor Newsom’s statement reflects an initial pushback against a wave of pending AI regulations as states wrestle with suitable AI guidance.  Given the pro-employee tendencies of Governor Newsom and California regulators generally, this outcome is a mild surprise.  Employers nonetheless should expect continued scrutiny of AI regulations before enactment.

This legislative activity surely sets the stage for what many believe is the next wave of class action litigation.

Overview Of SB 7: The “No Robo Bosses” Act

The Act was first introduced in December 2024.  After several amendments, it was passed on September 23, 2025 and sent to Governor Newsom for signature.  The Act’s key proposals included a prohibition on employers relying solely on AI to make disciplinary or termination decisions, a requirement of human input for AI-assisted disciplinary or termination decisions, detailed advance notice requirements for use of AI in hiring or employment-related decisions, and post-decision notice requirements when an employer primarily relied on AI for disciplinary or termination decisions.

The Act focused on automated decision-making systems (“ADS”) and “employment-related decisions.”  Under the Act, an ADS is defined as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decision making and materially impacts natural persons.”  This definition swept in a broad swath of technologies utilized by many employers, such as call analytics tools, automated scheduling platforms, keystroke and computer monitoring software, and AI-based training programs.  SB 7 also defined “employment-related decisions” as “any decision by an employer that materially impacts a worker’s wages, benefits, compensation, work hours, work schedule, performance evaluation, hiring, discipline, promotion, termination, job tasks, skill requirements, work responsibilities, assignment of work, access to work training opportunity, productivity requirements, or workplace health or safety.”

The Act also incorporated various pre-notice and post-notice requirements.  Employers using an ADS to make employment-related decisions (excluding hiring) would have been required to provide a “pre-notice” at least 30 days before deploying the ADS, and to give new hires 30 days’ notice of any ADS use.  The Act likewise included “post-notice” provisions requiring an employer that relied on an ADS to make a discipline, termination, or deactivation decision to provide the impacted worker with notice at the time the employment decision is made.  Both notices were required to be written in plain language, delivered as a routine worker communication, and provided in an accessible format.

The “No Robo Bosses” Act proposed a civil penalty of $500 per violation, with enforcement authority vested in the Labor Commissioner and public prosecutors of California.  The proposed Act did not include a private right of action.

Governor Newsom’s Veto Of The Act

Governor Newsom’s veto of the Act centered on its failure to target specific misuses of ADS technology and on its unfocused notification requirements.  Governor Newsom recognized the concerns associated with ADS in employment decision-making but argued the Act’s “proposed solution fail[ed] to directly address incidents of misuse.”  He also found that the restrictions embedded in the Act were so broad that they would remove “a potentially valuable tool” even where ADS systems are properly applied.  In the Governor’s view, the Act did not distinguish the benefits of ADS systems from the risks associated with particular ADS use cases.  Accordingly, Governor Newsom vetoed SB 7.

Implications Of The Veto

California employers do not yet have to modify their ADS systems based on Governor Newsom’s veto of SB 7, but given the Governor’s comments, it is possible new legislation will be introduced to narrow the use of ADS systems in employment decisions.  Governor Newsom’s veto of the Act also reflects a growing concern about legislative policies governing ADS systems and AI technologies – namely, that broad legislative efforts cannot efficiently or effectively address emerging technologies.  While employers can expect other states to propound ADS and AI legislation in the context of employment decision-making, employers should consider that if the notoriously pro-employee State of California rejected this legislation as overly broad and unfocused, it may take some time for other jurisdictions to settle on a workable legislative approach.

Employers should continue to monitor federal developments in this area, as well.  In July 2023, the federal “No Robot Bosses Act,” S.2419, was introduced in the Senate.  While the bill has not been enacted, its provisions include similar limitations on the use of automated systems and would require human oversight before an automated decision is finalized.

A Recap Of The R.I.S.E. AI Conference At University Of Notre Dame 

By Alex W. Karasik

Duane Morris Takeaways: Artificial Intelligence has brilliantly transformed society to the point where no industry can fully separate from its impact. But the fruits of this technology must be carefully curated to ensure that its adoption is ethical.  An evolving legislative landscape and billion-dollar class action litigation industry loom large.

This week, at the University of Notre Dame’s inaugural R.I.S.E. AI Conference in South Bend, Indiana, Partner Alex W. Karasik of the Duane Morris Class Action Defense Group was a panelist at the highly anticipated session, “Challenges And Opportunities For Responsible Adoption Of AI.”  The Conference, which had over 300 attendees from 16 countries, produced excellent dialogues on how cutting-edge technologies can both solve and create problems, including class action litigation.

The Conference covered a wide range of global issues affected by AI.  Some of the topics included AI’s impact on data privacy, information governance, healthcare, education, voting, and Latin America – including discussions about how large language models develop when machines are trained in non-English languages.  For organizations that deploy this technology, or are thinking about doing so, the Conference was informative in terms of AI’s utility and risk.  The sessions provided valuable insight from a broad range of constituents, including business leaders, world-renowned academic scholars, technology professionals – and a lawyer from Chicago.

I had the privilege of discussing AI’s integration into the workplace in two areas: (1) proactive implementation; and (2) reactive class action litigation risk. There is no “one-size-fits-all” checklist for organizations to incorporate AI.  But there are several overarching principles that will likely be important factors when establishing an ethical and legally compliant AI framework. These include: (1) creating an AI steering committee with a diverse collection of viewpoints, including Legal, HR, IT, business operations, and other end-users – such as tech-savvy employees – who can collectively opine on the benefits and concerns of AI in the workplace; (2) crafting a robust yet unambiguous policy to ensure that all members of an organization are using AI responsibly and consistently; (3) implementing training programs for both managers and employees on how to equitably implement the AI policy, and understand its interplay with other policies such as EEO; (4) communicating with AI vendors to understand how AI models were trained; and (5) conducting audits before and after implementation to ensure AI use does not result in a disparate impact on certain demographics of applicants or employees.

From a litigation perspective, I discussed the “moving target” of AI laws popping up around the country, which may create compliance challenges.  While most of these laws are guided by the same fundamental principles (i.e., transparency and disclosure when AI is being used in the hiring process), accounting for minor variations may ultimately present compliance challenges for employers with national and international operations.  Class action litigation and EEOC-initiated systemic discrimination litigation will inevitably follow — as the EEOC v. iTutorGroup, Inc., et al., Case No. 1:22-CV-02565 (E.D.N.Y.) settlement (see our blog post) and currently pending Mobley v. Workday, Inc., Case No. 3:23-CV-00770 (N.D. Cal.) class action lawsuit (see our blog post) confirm.

Overall, I was amazed by the amount of business and academic talent at the Conference.  The Conference was an incubator for issue-spotting, brainstorming, and problem-solving.  I am grateful for the opportunity to learn about the statistical impact of AI on organizations – and thankful to my many new PhD friends for sharing explanations of their empirical studies.  Looking forward, I am optimistic that when constituents from all over the world and in a variety of professions collaborate, we will responsibly unlock AI’s greatest potential.

For more information about Duane Morris’s endeavors in the Artificial Intelligence space, please visit our Firm’s AI webpage here.

California Adopts New Rules Expanding The FEHA’s Reach To AI Tool Developers

By Gerald L. Maatman, Jr., Justin Donoho, and George J. Schaller

Duane Morris Takeaways: On October 1, 2025, California’s “Employment Regulations Regarding Automated-Decision Systems” will take effect.  These new AI employment regulations can be accessed here.  The regulations add an “agency” theory under the California Fair Employment and Housing Act (FEHA) and formalize this theory’s applicability to AI tool developers and companies employing AI tools that facilitate human decision making for recruitment, hiring, and promotion of job applicants and employees.  With California’s inclusion of a private right of action under the FEHA, these new AI employment regulations may augur an uptick in AI employment tool class actions brought under the FEHA.  This blog post identifies key provisions of this new law and steps employers and AI tool developers can take to mitigate FEHA class action risk.

Background 

In the widely-watched class action captioned Mobley v. Workday, No. 23-CV-770 (N.D. Cal.), the plaintiff alleges that an AI tool developer’s algorithm-based screening tools discriminated against job applicants on the basis of race, age, and disability in violation of Title VII of the Civil Rights Act of 1964 (“Title VII”), the Age Discrimination in Employment Act of 1967 (“ADEA”), the Americans with Disabilities Act Amendments Act of 2008 (“ADA”), and California’s FEHA.  Last year, the U.S. District Court for the Northern District of California denied dismissal of the Title VII, ADEA, and ADA disparate impact claims on the theory that the developer of the algorithm was plausibly alleged to be the employer’s agent, and dismissed the FEHA claim, which was brought only under the then-available theory of intentional aiding and abetting (as we previously blogged about here).

In recent years, discrimination stemming from AI employment tools has been addressed by other state and local statutes, including Colorado’s AI Act (CAIA) setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination,” New York City’s law regarding the use of automated employment decision tools, the Illinois AI Video Interview Act, and the 2024 amendment to the Illinois Human Rights Act (IHRA) to regulate the use of AI, with only the last of these laws providing for a private right of action (once it becomes effective January 1, 2026).

Key Provisions Of California’s AI Employment Regulations

California’s AI employment regulations amend and clarify how the FEHA applies to AI employment tools, thus constituting a new development in case theories available to class action plaintiffs regarding alleged harms stemming from AI systems and algorithmic discrimination.  

Employers and AI employment tool developers should take note of key provisions codified by California’s new AI employment regulations, as follows:

  • Agency theory.  An “agency” theory is added under the FEHA like the one that allowed the plaintiff in Mobley v. Workday to proceed past a motion to dismiss on his federal claims, whereby an AI tool developer may face litigation risk for developing algorithms that result in a disparate impact when the tool is used by an employer.  While Mobley v. Workday continues to proceed in the trial court, no appellate authority has yet had occasion to address the “agency” theories being litigated in that case under federal antidiscrimination statutes.  However, with the California AI employment regulations taking effect October 1, 2025, that theory is now expressly codified under the FEHA.  2 Cal. Code Regs § 11008(a).
  • Proxies for discrimination.  The regulations clarify that it is unlawful to use an employment tool algorithm that discriminates by using a “proxy,” which the regulations define as a “characteristic or category closely correlated with a basis protected by the Act.”  Id. §§ 11008(a), 11009(f).  While the regulations do not explicitly identify any proxies, proxies that have been identified in literature by the EEOC’s former Chief Analyst include zip code (this proxy is also codified in the IHRA), first name, alma mater, credit history, and participation in hobbies or extracurricular activities.
  • Anti-bias testing.  The regulations state that relevant to a claim of employment discrimination or an available defense are “anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of such testing or other effort, and the response to the results.”  Id. § 11020(b).  Thus, for example, adoption of the NIST’s AI risk management framework, itself codified as a defense under the CAIA, could be a factor to consider as a defense under the FEHA.  Many other factors are pertinent with respect to anti-bias testing, including auditing, tuning, and the use of various interpretability methods and fairness metrics, discussed in our prior blog entry and article on this subject (here); a simplified illustration of one such metric appears in the sketch following this list.
  • Data retention.  The regulations provide that employers, employment agencies, labor organizations, and apprenticeship training programs must maintain employment records, including automated-decision data, for a minimum of four years.  Id. § 11013(c).
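To make the anti-bias testing provision concrete, the short sketch below computes one commonly cited screening metric: the ratio of each group’s selection rate to the highest group’s rate, associated with the EEOC’s four-fifths rule of thumb. It is illustrative only; the regulations do not prescribe any particular metric, and meaningful anti-bias testing typically combines several fairness metrics with statistical testing and expert review.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected_bool) pairs, e.g., from an AI screening tool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is often treated as a flag for further review under the
    four-fifths guideline; it is not, by itself, a legal conclusion."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: 40% selection rate for group_a, 25% for group_b
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(adverse_impact_ratios(sample))  # group_b ratio = 0.25 / 0.40 = 0.625, below 0.8
```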

Implications For Employers

California’s AI employment regulations increase employers’ and AI tool developers’ risks of facing class action lawsuits similar to Mobley v. Workday alleging discrimination under the FEHA.  However, developers and employers have several tools at their disposal to mitigate AI employment tool class action risk.  One is to ensure that AI employment tools comply with the FEHA provisions discussed above and with other antidiscrimination statutes.  Others include adding or updating arbitration agreements to mitigate the risks of mass arbitration; collaborating with IT, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and updating notices to third parties and vendor agreements.

Best Practices To Mitigate The Risk Of Class Action Litigation Over AI Pricing Tool Noncompliance With Antitrust And AI Statutes

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Ten Design Guidelines to Mitigate the Risk of AI Pricing Tool Noncompliance with the Federal Trade Commission Act, Sherman Act, and Colorado AI Act.”  The article is available here and is a must-read for corporate counsel involved with development or deployment of AI pricing tools.

While artificial intelligence (AI) pricing tools can improve revenues for retailers, suppliers, hotel operators, landlords, ride-hailing platforms, airlines, ticket distributors, and more, designers and deployers of such tools increasingly face the risk of being targeted in lawsuits brought by governmental bodies and class action plaintiffs alleging unfair methods of competition in violation of the Federal Trade Commission (FTC) Act and agreements that restrain trade in violation of the federal Sherman Act.  This article identifies recently emerging trends in such lawsuits, including one currently on appeal in the U.S. Court of Appeals for the Third Circuit and three pending in district courts, draws common threads, and discusses ten guidelines that AI pricing tool designers should consider to mitigate the risk of noncompliance with the FTC Act, the Sherman Act, and the Colorado AI Act.

Implications For Corporations

AI pricing tools designed to comply with antitrust and AI laws face a lower risk of an expensive class action lawsuit or government-initiated proceeding alleging violations of those laws than tools not designed for compliance.  Moreover, by enabling and automating informed pricing decisions, AI pricing tools hold the potential to drive market efficiencies.  This article identifies best practices to assist with such compliance and, relatedly, such market efficiencies.

Best Practices To Mitigate The Risk Of AI Hiring Tool Noncompliance With Antidiscrimination Statutes

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Five Human Best Practices to Mitigate the Risk of AI Hiring Tool Noncompliance with Antidiscrimination Statutes.”  The article is available here and is a must-read for corporate counsel involved with development or deployment of AI hiring tools.

While artificial intelligence (AI) hiring tools can improve efficiencies in human resource functions, such as candidate sourcing, resume screening, interviewing, and background checks, AI has not replaced the need for humans to ensure that AI-assisted human resources (HR) practices comply with a wide range of antidiscrimination laws such as Title VII of the Civil Rights Act of 1964 (Title VII), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), the sections of Colorado’s AI Act setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination” (CAI), New York City’s law regarding the use of automated employment decision tools (NYC’s AI Law), the Illinois AI Video Interview Act (IAIVA), and the 2024 amendment to the Illinois Human Rights Act to regulate the use of AI (IHRA).  This article identifies human best practices to mitigate the risk of companies’ AI hiring tools violating the foregoing statutes, according to the statutes, EEOC regulations, and scholarly sources authored by EEOC personnel and leading data scientists.

Implications For Corporations

AI hiring tools designed to comply with antidiscrimination statutes face a lower risk of giving rise to liability under those statutes than tools not designed for compliance.  Moreover, by eliminating some human decision-making and replacing it with carefully designed algorithms, AI holds the potential to substantially reduce the kind of bias that has been unlawful in the United States since the civil rights movement of the mid-twentieth century.  This article identifies human best practices to assist with such compliance and, relatedly, such potential substantial reduction of bias.

The FTC Issues Three New Orders Showing Its Increased 2024 Enforcement Activities Regarding AI And Adtech

By Gerald L. Maatman, Jr. and Justin R. Donoho

Duane Morris Takeaways: On December 3, 2024, the Federal Trade Commission (FTC) issued an order in In Re Intellivision Technologies Corp. (FTC Dec. 3, 2024) prohibiting an AI software developer from making misrepresentations that its AI-powered facial recognition software was free from gender and racial bias, and two orders in In Re Mobilewalla, Inc. (FTC Dec. 3, 2024) and In Re Gravy Analytics, Inc. (FTC Dec. 3, 2024) requiring data brokers to improve their advertising technology (adtech) privacy and security practices.  These three orders highlight the FTC’s markedly increased enforcement activity in the areas of AI and adtech in 2024.

Background

In 2024, the FTC brought and litigated at least 10 enforcement actions involving alleged deception about AI, alleged AI-powered fraud, and allegedly biased AI.  See the FTC’s AI case webpage located here.  This is a fivefold increase from the at least two AI-related actions brought by the FTC last year.  See id.  Just as private class actions involving AI are on the rise, so are the FTC’s AI-related enforcement actions.

This year the FTC also brought and litigated at least 21 enforcement actions categorized by the FTC as involving privacy and security.  See the FTC’s privacy and security webpage located here.  This is about twice the case activity by the FTC in privacy and data security cases compared with 2023.  See id.  Most of these new cases involve alleged unfair use of adtech, an area of recently increased litigation activity in private class actions, as well.

In short, this year the FTC officially achieved its “paradigm shift” of focusing enforcement activities on modern technologies and data privacy, as forecasted in 2022 by the FTC’s Director, Bureau of Consumer Protection, Samuel Levine, here.

All these complaints were brought by the FTC under the FTC Act, under which there is no private right of action.

The FTC’s December 3, 2024 Orders

In Intellivision, the FTC brought an enforcement action against a developer of AI-based facial recognition software embedded in home security products to enable consumers to gain access to their home security systems.  According to the complaint, the developer described its facial recognition software publicly as being entirely free of any gender or racial bias as shown by rigorous testing when, in fact, testing by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) showed that the software was not among the top 100 best performing algorithms tested by NIST in terms of error rates across different demographics, including region of birth and sex.  (Compl. ¶ 11.)  Moreover, according to the FTC, the developer did not possess any of its own testing to support its claims of lack of bias.  Based on these allegations, the FTC brought misrepresentation claims under the FTC Act.  The parties agreed to a consent order, in which the developer agreed to refrain from making any representations about the accuracy, efficacy, or lack of bias of its facial recognition technology, unless it could first substantiate such claims with reliable testing and documentation as set forth in the consent order.  The consent order also requires the developer to communicate the order to any of its managers and affiliated companies in the next 20 years, to make timely compliance reports and notices, and to create and maintain various detailed records, including regarding the company’s accounting, personnel, consumer complaints, compliance, marketing, and testing.

In Mobilewalla and Gravy Analytics, the FTC brought enforcement actions against data brokers who allegedly obtained consumer location data from other data suppliers and mobile applications and sold access to this data for purposes of online advertising without consumers’ consent.  According to the FTC’s complaints, the data brokers engaged in unfair collection, sale, use, and retention of sensitive location information, all in alleged violation of the FTC Act.  The parties agreed to consent orders, in which the data brokers agreed to refrain from collecting, selling, using, and retaining sensitive location information; to establish a Sensitive Location Data Program, Supplier Assessment Program, and a comprehensive privacy program, as detailed in the orders; provide consumers clear and conspicuous notice; provide consumers a means to request data deletion; delete location data as set forth in the order; and perform compliance, recordkeeping, and other activities, as set forth in the order.

Implications For Companies

The FTC’s increased enforcement activities in the areas of adtech and AI serve as a cautionary tale for companies using adtech and AI. 

As the FTC’s recent rulings and its 2024 dockets show, the FTC is increasingly using the FTC Act as a sword against alleged unfair use of adtech and AI.  Moreover, although the December 3 orders do not expressly impose any monetary penalties, the injunctive relief they impose may be costly, and other FTC consent orders have included harsher penalties, such as express penalties of millions of dollars and algorithmic disgorgement.  As adtech and AI continue to proliferate, and in light of the FTC’s increased enforcement activities in these areas – as well as the increased activities of the plaintiffs’ class action bar and the EEOC, as we blogged about here, here, here, here, and here – organizations should consider whether to modify their website terms of use, data privacy policies, and other notices to website visitors and customers to describe their use of AI and adtech in additional detail.  Doing so could deter, or help defend against, a future enforcement action or class action similar to the many being filed today that allege omission of such details and seek a wide range of injunctive and monetary relief.

Announcing A New Journal Article By Justin Donoho Of Duane Morris Explaining Best Practices To Mitigate High-Stakes AI Litigation Risk

By Justin Donoho

Duane Morris Takeaway: Available now is the recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Three Best Practices to Mitigate High-Stakes AI Litigation Risk.”  The article is available here and is a must-read for corporate counsel.

Organizations using AI-based technologies that perform facial recognition or other facial analysis, website advertising, profiling, automated decision making, educational operations, clinical medicine, generative AI, and more increasingly face the risk of being targeted by class action lawsuits and government enforcement actions alleging that they improperly obtained, disclosed, and misused personal data of website visitors, employees, customers, students, patients, and others, or that they infringed copyrights, fixed prices, and more. These disputes often seek millions or billions of dollars against businesses of all sizes. This article identifies recent trends in such varied but similar AI litigation, draws common threads, and discusses three best practices that corporate counsel should consider to mitigate AI litigation risk: (1) add or update arbitration clauses to mitigate the risks of mass arbitration; (2) collaborate with information technology, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and (3) update notices to third parties and vendor agreements.

Implications For Corporations

Companies using AI technologies face multimillion- or billion-dollar risks of litigation seeking statutory and common-law damages under a wide variety of laws, including privacy statutes, wiretap statutes, unfair and deceptive practices statutes, antidiscrimination statutes, copyright statutes, antitrust statutes, common-law invasion of privacy, breach of contract, negligence, and more.  This article analyzes litigation brought under these laws and offers corporate counsel three best practices to mitigate the risk of similar cases.

Announcing A New ABA Article By Duane Morris Partner Alex Karasik Explaining The EEOC’s Artificial Intelligence Evolution


By Alex W. Karasik

Duane Morris Takeaway: Available now is the recent article in the American Bar Association’s magazine “The Brief” by Partner Alex Karasik entitled “An Examination of the EEOC’s Artificial Intelligence Evolution.”[1]  The article is available here and is a must-read for all employers and corporate counsel!

In the aftermath of the global pandemic, employee hiring has become a major challenge for businesses across the country, regardless of industry or region. Businesses want to accomplish this goal in the most time- and cost-effective way possible. Employers remain in vigorous pursuit of anything that can give them an edge in recruiting, hiring, onboarding, and retaining the best talent. In 2023, artificial intelligence (AI) emerged as the focal point of that pursuit. The use of AI offers an unprecedented opportunity to facilitate employment decisions. Whether it is sifting through thousands of resumes in a matter of seconds, aggregating information about interviewees’ facial expressions, or generating data to guide compensation adjustments, AI has already had a profound impact on how businesses manage their human capital.

Title VII of the Civil Rights Act of 1964, which is the cornerstone federal employment discrimination law, does not contain statutory language specifically about the use of AI technologies, which did not emerge until several decades later. However, the U.S. Equal Employment Opportunity Commission (EEOC), the federal government agency responsible for enforcing Title VII, has made it a strategic priority to prevent and redress employment discrimination stemming from employers’ use of AI to make employment decisions regarding prospective and current employees.

Focusing on the EEOC’s pioneering efforts in this space, this article explores the risks of using AI in the employment context. First, the article examines the current litigation landscape with an in-depth case study analysis of the EEOC’s first AI discrimination lawsuit and settlement. Next, to figure out how we got here, the article travels back in time through the origins of the EEOC’s AI initiative to present-day outreach efforts. Finally, the article reads the EEOC’s tea leaves about the future of AI in the workplace, offering employers insight into how to best navigate the employment decision-making process when implementing this generation-changing technology.

Implications For Employers: Similar to the introduction of technologies such as the typewriter, computer, internet, and cell phone, there are, understandably, questions and resulting debates about the precise impact that AI will have on the business world, including the legal profession. To best adopt any new technology, one must first invest in understanding how it works. The EEOC has done exactly that over the last several years. The businesses that use AI software to make employment decisions must similarly make a commitment to fully understand its impact, particularly with regard to applicants and employees who are members of protected classes. The employment evolution is here, and those who are best equipped to understand the risks and rewards will thrive in this exciting new era.

[1] An Examination of the EEOC’s Artificial Intelligence Evolution, The Brief, Vol. 53, No. 2 (Winter 2024).  © 2024 by the American Bar Association.  Reproduced with permission.  All rights reserved.  This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Artificial Intelligence Litigation Risks in the Employment Discrimination Context


By Gerald L. Maatman, Jr., Alex W. Karasik, and George J. Schaller

Duane Morris Takeaway: Artificial intelligence took the employment world by storm in 2023, quickly becoming one of the most talked about and debated subjects among corporate counsel across the country. Companies will continue to use AI as a resource to enhance decision-making processes for the foreseeable future as these technologies evolve and take shape in a myriad of employment functions. As these processes are fine-tuned, those who seek to harness the power of AI must be aware of the risks associated with its use. This featured article analyzes two novel AI lawsuits and highlights recent governmental guidance related to AI use. As the impact of AI is still developing, companies should recognize the types of claims apt to be brought for use of AI screening tools in the employment context and the implications of possible discriminatory conduct stemming from these tools.

In the Spring 2024 issue of the Journal of Emerging Issues in Litigation, Duane Morris partners Jerry Maatman and Alex Karasik and associate George Schaller analyze key developments in litigation and enforcement shaping the impact of artificial intelligence in the workplace and its subsequent legal risks. Read the full featured article here.
