ChatGPT And The Legal Profession

As the world continues to be enthralled by ChatGPT’s human-like ability to engage in conversation and generate content that is often indistinguishable from the work of professionals such as journalists, authors, professors (and yes, even lawyers), OpenAI has launched GPT-4, less than six months after ChatGPT’s debut in November 2022.  The new model is touted as being more creative and collaborative than its predecessor.  Not surprisingly, GPT-4 improved its performance on standardized tests, including a simulated bar exam, where it achieved a score in the top 10% of test takers, whereas the previous version fell in the bottom 10%.  Similarly, GPT-4 now scores between 163 and 170 on the LSAT, placing it in the 88th percentile, up from the 40th percentile only a few months earlier.

ChatGPT Is Accessible and Easy to Use

ChatGPT is relatively simple to use—all you have to do is type in your request on the ChatGPT website. With its ease of use and open access, the legal industry is grappling with what it all means for lawyers, and how they should be thinking about this technology.  What are the benefits, and what are the risks and pitfalls?  To answer these questions, it is important first to have a basic grasp of how the technology works and what is going on “under the hood” when it generates coherent and fluent text on a wide range of topics that often appears indistinguishable from text written by a human.

How Generative AI Works

GPT stands for generative pre-trained transformer, a language model trained on a large corpus of text.  During training, the model learns to assign a score to each candidate next word; to generate text, it chooses a high-scoring word and moves on to the next one. Every choice is driven by complex algorithms and huge amounts of data, which allows the model to produce text that is both coherent and accurate (most of the time).  And unlike traditional AI systems designed to recognize patterns and make predictions (e.g., your Netflix and Amazon recommendations), generative AI creates new content in the form of images, text, audio, and more.  Estimates indicate that the ChatGPT model was trained on approximately 45 terabytes of text data, and that over 300 billion words were fed into the system, drawn from books, web texts, Wikipedia, articles, and other writing on the internet.  ChatGPT was also trained on examples of back-and-forth human conversation, which helps its dialogue sound far more human.  So not only can it tap into a vast vocabulary and store of information, but it also understands words in context, which helps it mimic speech patterns while displaying encyclopedic knowledge.
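For readers curious what “scoring and choosing the next word” looks like in practice, the toy sketch below illustrates the idea. It is emphatically not the actual GPT algorithm or its data—real models use neural networks over enormous vocabularies and sample probabilistically—and the word scores here are invented purely for demonstration.

```python
# Hypothetical next-word scores: for each current word, candidate
# continuations and made-up weights (a real model computes these
# with a neural network trained on billions of words).
NEXT_WORD_SCORES = {
    "the": {"statute": 0.6, "court": 0.3, "banana": 0.01},
    "statute": {"of": 0.9, "was": 0.1},
    "of": {"limitations": 0.8, "fraud": 0.2},
}

def generate(start: str, max_words: int = 4) -> str:
    """Greedily extend `start` by repeatedly picking the highest-scoring next word."""
    words = [start]
    while len(words) < max_words:
        candidates = NEXT_WORD_SCORES.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # the statute of limitations
```

The key point for lawyers is that nothing in this process involves understanding: the model strings together statistically likely words, which is why the output can sound authoritative while being wrong.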

For the over 100 million users who have experimented with ChatGPT, it is easy to be mesmerized by its capabilities and to wonder what it means for those white-collar professions that many believed were impervious to displacement by AI, particularly lawyers.  For instance, when you type in a simple question such as: “What is the statute of limitations for a securities fraud claim?” ChatGPT will respond:

The statute of limitations for a securities fraud claim typically depends on the specific laws and regulations that apply to the claim.

Under federal law, a securities fraud claim must generally be brought within two years after the discovery of the facts giving rise to the claim or within five years after the violation occurred, whichever is earlier. This is set forth in Section 1658(b) of the Securities Exchange Act of 1934.

However, there are certain exceptions to this rule. For example, if the plaintiff can show that the defendant engaged in a fraudulent concealment of the facts giving rise to the claim, the statute of limitations may be extended. Additionally, some states may have their own statutes of limitations for securities fraud claims that differ from the federal law.

It’s important to note that the statute of limitations for securities fraud claims can be complex and may depend on the specific facts of the case.

Practicing attorneys handling securities litigation instinctively know that this response is by no means the end of the inquiry.  But having a starting point from which to initiate a more in-depth analysis illustrates the possibilities for integrating the technology into daily legal work.  And attorneys who have asked the program to prepare work product like an insurance reservation of rights letter will tell you that they got back a surprisingly workable template from which to start the editing process.

It is crucial to note, however, that although ChatGPT appears to “know” something about securities fraud, unlike a real lawyer, ChatGPT has no idea that the Securities Exchange Act is an important statute enacted to govern the secondary trading of securities (stocks, bonds, and debentures) in this country, nor does it know what a statute of limitations is.  But its powerful algorithms can piece together words that give the appearance of knowledge on these subjects.  Understanding this about the technology should cause as much excitement as it does trepidation.

Professional Service Providers Have Already Rolled Out Generative AI

Indeed, the technology is so promising that some professional service providers have rolled out generative AI tools built specifically for legal and business work.  For instance, Harvey, which is backed by the OpenAI Startup Fund and built on OpenAI and ChatGPT technology, is being adopted by some of the largest professional service providers in the world.  According to PwC, it is a platform that uses natural language processing, machine learning, and data analytics to generate insights and recommendations based on large volumes of data, delivering richer information that will enable PwC professionals to identify solutions faster.

Despite the obvious promise exhibited by ChatGPT’s current abilities, the technology is still in its infancy. In fact, results obtained from ChatGPT are often riddled with errors and, in some cases, outright falsehoods.  In one instance, it referenced a non-existent California ethics provision.  When generative AI simply makes things up, and does so with complete and utter confidence, the tech industry calls the result a “hallucination.” With these risks in mind, professional liability carriers are issuing warnings to law firms on the professional responsibility and risk management implications of the technology.

Given the promise of ChatGPT, tempered by the associated risks, law firms and corporate counsel are certain to ask themselves what comes next for this technology.  Right now, for things like contracts, policies, and other legal documents that tend to be normative, generative AI’s capabilities in gathering and synthesizing information can do a lot of heavy lifting. Therefore, the legal industry should be on the lookout for emerging technologies, like ChatGPT, that can tackle such low-hanging fruit, with the immediate benefit being potential cost savings for law firms and their clients.

Regulators Are Trying to Catch Up

Though AI technology like ChatGPT is developing at an exponential pace, there are currently no statutes or regulations governing AI that are binding across the United States.  With this in mind, issues such as the potential for algorithmic bias, dissemination of misinformation at scale, and copyright infringement are top of mind for business leaders and legislators attempting to determine what role regulation should play in the AI revolution.

In October 2022, the U.S. White House Office of Science and Technology Policy (OSTP) launched a “Blueprint for an AI Bill of Rights” in an effort to provide a framework for how government, technology companies, and citizens work together to ensure more accountable AI.  According to the White House, the document “is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.”  The OSTP has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; (5) Human Alternatives, Consideration, and Fallback.

Contemporaneous with U.S. efforts, approximately 60 countries are focused on AI initiatives in an effort to design and implement policies promoting the responsible use of AI technology, because many fear that, without adequate governance, AI can do significant harm to individuals and our society.  A specific example is the European Union’s Artificial Intelligence Act (EU AI Act), which is due in 2024, and aims to introduce a common regulatory and legal framework for AI across all sectors (except military).  Experts monitoring the development of such laws rightly ask whether the EU’s approach to AI will be broadly adopted across the globe, similar to how many of the world’s websites have adopted the EU’s data privacy requirements to ask users for consent to process personal data and use cookies.

The Future Is Today

As governments around the world attempt to devise regulatory regimes that keep pace with AI technology, attorneys with immediate client problems to solve will have to grapple with how to integrate this technology into their work streams while mitigating the associated risks.  As every attorney has learned since the advent of Google, even though a well-crafted search can get you close, more work is required to ensure you have the right answer.  The same approach applies to ChatGPT, though it may prove more difficult when the answer seems like it came from a lawyer.


Proposed New Rule 16.1 Encourages Early Assessment of Discovery Issues in MDLs

On March 28, 2023, the federal judiciary’s Advisory Committee on Civil Rules voted in favor of publishing draft Rule 16.1 regarding initial case management in multidistrict litigation (MDL) for public comment.

Although the proposed rule does not require a transferee court to hold an initial management conference, it encourages the transferee court to do so in order to “develop a management plan for orderly pretrial activity in the MDL proceedings.”  While most courts already hold one or more management conferences at an early stage of MDL proceedings, the Draft Committee Note to the proposed rule emphasizes the need to formalize “a framework for the initial management of MDL proceedings” given that MDLs account for a large percentage of the federal civil docket.  

The proposed rule encourages the court to provide litigants with a set of issues that they must address at the initial conference and outlines a non-exhaustive list of twelve topics for courts to consider. Four of these topics have the potential to impact the parties’ discovery strategy.

Court Interprets China’s PIPL As Containing Exceptions for Discovery in Cryptocurrency Class Action 

By Jessica Priselac

Duane Morris Takeaway: While very few courts have been faced with interpreting China’s Personal Information Protection Law (“PIPL”), two judges have now held that there is no conflict between China’s PIPL and U.S. law with respect to a litigant’s compliance with discovery obligations in the U.S.  Although these two initial decisions are not binding on other courts, the dearth of case law on this issue makes it likely that both decisions will influence future interpretations of PIPL in U.S. litigation.

China’s Personal Information Protection Law (“PIPL”) came into effect on November 1, 2021, and is the country’s first comprehensive statute regulating the protection of personal information.  While the legislation has generated significant attention from the media and from companies seeking to ensure compliance with the law in their day-to-day operations, the statute also has important implications for discovery in U.S. litigation.  A recent decision in Owen v. Elastos Foundation by Magistrate Judge Barbara Moses of the Southern District of New York provides insight into how U.S. courts may interpret PIPL in the context of discovery disputes.

Owen is a putative class action in which plaintiffs claim that defendant Elastos Foundation failed to register the cryptocurrency it created and sold in the United States.  Although defendants initially agreed to collect and produce data from 19 custodians, defendants later informed plaintiffs that PIPL prevented the collection and production of data belonging to certain custodians because it contained the personal information of individuals located in China.  Plaintiffs then filed a motion to compel the production of documents that had not been produced by defendants on the basis of PIPL.

As outlined by the court, PIPL prohibits accessing or handling the personal information of individuals located in China.  Exceptions to this prohibition include obtaining the individual’s consent, as well as certain other exceptions outlined in Article 13 of the statute.  In support of their respective positions, the parties submitted competing expert reports as to the correct interpretation of PIPL, including whether any of the statute’s exceptions applied in the context of the parties’ discovery dispute.

With respect to the first category of data at issue, which was located outside of China but contained personal information related to individuals located in China, Judge Moses held that PIPL did not apply.  In reaching this conclusion, she interpreted the statute as applying to data located outside of China only under certain circumstances set forth in Article 3 of PIPL, such as when data is being handled or processed to track or analyze the behavior of persons located in China for marketing purposes.  She held that none of those enumerated circumstances applied in the discovery context, and compelled the production of data located outside of China.

As to the second category of data, all of which was stored inside of China, Judge Moses turned to the text of the statute, and held that processing and producing data located in China for the purpose of complying with U.S. law does not conflict with PIPL.  Specifically, she relied on Sections 3 and 7 of Article 13 of PIPL, which permit the processing of personal information if “the processing is necessary for the performance of statutory duties or obligations,” or under “other circumstances provided by laws or administrative regulations.”  PIPL Art. 13, §§ 3, 7.  While defendants argued that these exceptions only applied to compliance with Chinese laws, Judge Moses rejected defendants’ argument and emphasized that the text of the statute did not support defendants’ interpretation.

In relying on exceptions to PIPL that are grounded in compliance with other laws and regulations, Judge Moses cited another opinion interpreting PIPL, Cadence Design Systems, Inc. v. Syntronic AB.  In that case, Chief Magistrate Judge Joseph C. Spero of the Northern District of California similarly relied on Article 13 of PIPL in holding that there was no conflict between U.S. law and PIPL as it pertained to compliance with one of his discovery orders.

As two of the first opinions interpreting PIPL, the opinion of Judge Moses in Owen and that of Judge Spero in Cadence are likely to influence future decisions regarding the application of PIPL in the context of discovery.  Accordingly, those who are inclined to assert PIPL in opposition to a motion to compel must be prepared to grapple with the question of whether it presents a conflict with U.S. law.

Public Comment Period for The Sedona Conference TAR Case Law Primer Ends March 27

By Jessica Priselac and Brandon Spurlock

The Sedona Conference TAR Case Law Primer, Second Edition is set to become a key resource for judges and practitioners on discovery disputes related to technology assisted review (TAR).  The public comment version is now available for download.

The primer is a project of The Sedona Conference Working Group on Electronic Document Retention and Production (Working Group 1), and the new edition addresses issues and case law that have evolved since the primer was first published in January 2017.  The primer covers the shift from TAR 1.0 systems to TAR 2.0, summarizes the current state of the law, identifies key trends, and reviews questions that remain unsettled in the case law.

Public comments will be accepted until March 27, 2023.

Welcome To The Duane Morris Discovery Strategy And Data Analytics Blog

By Brandon Spurlock and Jessica Priselac 

We are excited to launch this new Duane Morris blog, which will cover all aspects of discovery, including the impact of emerging technologies on the discovery process, the evolving laws and regulations from around the globe that are shaping discovery strategy in U.S. litigation, and recent cases that provide insight into how state and federal courts are grappling with these issues.

And for those of you who know us, it will come as no surprise that we will also be covering discovery topics that arise out of class actions and MDLs in conjunction with our colleagues over at the Duane Morris Class Action Defense Blog.

Happy reading!

Brandon and Jessica

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
