As the world continues to be enthralled by ChatGPT’s human-like ability to engage in conversation and generate content that is often indistinguishable from the work of professionals such as journalists, authors, and professors (and yes, even lawyers), OpenAI has already released a successor. Less than six months after ChatGPT’s launch in November 2022, OpenAI introduced GPT-4, which is touted as more creative and collaborative than its predecessor. Not surprisingly, GPT-4 improved its performance on standardized tests, including a simulated bar exam, where it achieved a score in the top 10% of test takers, whereas the previous version fell in the bottom 10%. Similarly, GPT-4 now scores between 163 and 170 on the LSAT, putting it in the 88th percentile, up from the 40th percentile only a few months ago.
ChatGPT Is Accessible and Easy to Use
ChatGPT is relatively simple to use: all you have to do is type a request into the ChatGPT website. Given its ease of use and open access, the legal industry is grappling with what the technology means for lawyers and how they should think about it. What are the benefits, and what are the risks and pitfalls? To answer these questions, it is important first to have a basic grasp of how the technology works and what is going on “under the hood” when it generates coherent, fluent text on a wide range of topics that often appears indistinguishable from text written by a human.
How Generative AI Works
GPT stands for generative pre-trained transformer, a language model trained on a large corpus of text data. Through machine learning, the model analyzes that text and learns to assign a score to every candidate for the next word; it then chooses a high-scoring word, appends it, and moves on to the next one. Every choice is determined by complex algorithms and huge amounts of data, which allows it to produce text that is both coherent and accurate (most of the time). And unlike traditional AI systems designed to recognize patterns and make predictions (e.g., your Netflix and Amazon recommendations), generative AI creates new content in the form of images, text, audio, and more. Estimates indicate that the ChatGPT model was trained on approximately 45 terabytes of text data, with over 300 billion words fed into the system, drawn from books, web texts, Wikipedia, articles, and other writing on the internet. ChatGPT was also trained on examples of back-and-forth human conversation, which helps make its dialogue sound far more human. So not only can it tap into a vast vocabulary and store of information, but it also uses words in context, which helps it mimic speech patterns while deploying encyclopedic knowledge.
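To make that next-word loop concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is a hypothetical stand-in: the tiny vocabulary and scores are invented placeholders, and the toy_scores function substitutes for a real transformer, which would compute context-dependent scores from billions of learned parameters.

    import math
    import random

    # Toy illustration of next-word prediction, the core loop behind
    # GPT-style models. Nothing here reflects OpenAI's actual model.

    def toy_scores(context):
        """Return invented raw scores for a tiny placeholder vocabulary.
        A real model would condition these scores on the full context."""
        vocabulary = ["the", "statute", "of", "limitations", "claim", "."]
        return dict(zip(vocabulary, [2.1, 3.5, 1.2, 3.0, 0.5, 0.8]))

    def softmax(scores):
        """Convert raw scores into a probability distribution."""
        exps = {w: math.exp(s) for w, s in scores.items()}
        total = sum(exps.values())
        return {w: e / total for w, e in exps.items()}

    def generate(prompt, n_words=5):
        """Repeatedly sample a likely next word and append it."""
        context = prompt.split()
        for _ in range(n_words):
            probs = softmax(toy_scores(context))
            words, weights = zip(*probs.items())
            context.append(random.choices(words, weights=weights)[0])
        return " ".join(context)

    print(generate("What is the"))

Systems like ChatGPT run essentially this loop at enormous scale, which is why the output reads fluently even though the model is only ever predicting the next word.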
For the more than 100 million users who have experimented with ChatGPT, it is easy to be mesmerized by its capabilities and to wonder what it means for white-collar professions, law chief among them, that many believed were impervious to displacement by AI. For instance, when you type in a simple question such as “What is the statute of limitations for a securities fraud claim?” ChatGPT will respond:
The statute of limitations for a securities fraud claim typically depends on the specific laws and regulations that apply to the claim.
Under federal law, a securities fraud claim must generally be brought within two years after the discovery of the facts giving rise to the claim or within five years after the violation occurred, whichever is earlier. This is set forth in Section 1658(b) of the Securities Exchange Act of 1934.
However, there are certain exceptions to this rule. For example, if the plaintiff can show that the defendant engaged in a fraudulent concealment of the facts giving rise to the claim, the statute of limitations may be extended. Additionally, some states may have their own statutes of limitations for securities fraud claims that differ from the federal law.
It’s important to note that the statute of limitations for securities fraud claims can be complex and may depend on the specific facts of the case.
Practicing attorneys handling securities litigation instinctively know that this response is by no means the end of the inquiry. But having a starting point from which to initiate a more in-depth analysis illustrates the possibilities for integrating the technology into daily legal work. And attorneys who have asked the program to prepare work product like an insurance reservation of rights letter will tell you that they got back a surprisingly workable template from which to start the editing process.
It is crucial to note, however, that although ChatGPT appears to “know” something about securities fraud, unlike a real lawyer it has no idea that the Securities Exchange Act is an important statute enacted to govern the secondary trading of securities (stocks, bonds, and debentures) in this country, nor does it know what a statute of limitations is. Its powerful algorithms simply piece together words in a way that gives the appearance of knowledge on these subjects. Understanding this about the technology should cause as much excitement as it does trepidation.
Professional Service Providers Have Already Rolled Out Generative AI
Indeed, the technology is so promising that some professional service providers have already rolled out generative AI tools built specifically for legal and business work. For instance, Harvey, which is backed by the OpenAI Startup Fund and built on OpenAI’s GPT technology, is being adopted by some of the largest professional service providers in the world. According to PwC, it is a platform that uses natural language processing, machine learning, and data analytics to generate insights and recommendations from large volumes of data, delivering richer information that will enable PwC professionals to identify solutions faster.
Despite the obvious promise exhibited by ChatGPT’s current abilities, the technology is still in its infancy, and its output is often riddled with errors and, in some cases, outright falsehoods. In one instance, it referenced a non-existent California ethics provision. When generative AI simply makes things up, and does so with complete and utter confidence, the tech industry calls the result a “hallucination.” With these risks in mind, professional liability carriers are issuing warnings to law firms on the professional responsibility and risk management implications of the technology.
Given the promise of ChatGPT, tempered by the associated risks, law firms and corporate counsel are certain to ask themselves what comes next for this technology. Right now, for contracts, policies, and other legal documents that tend to be normative, generative AI’s ability to gather and synthesize information can do a lot of heavy lifting. The legal industry should therefore be on the lookout for emerging technologies, like ChatGPT, that can tackle such low-hanging fruit, with the immediate benefit being potential cost savings for law firms and their clients.
Regulators Are Trying to Catch Up
Though AI technology like ChatGPT is developing at an exponential pace, there are currently no AI-specific statutes or regulations binding across the United States. With this in mind, issues such as the potential for algorithmic bias, dissemination of misinformation at scale, and copyright infringement are top of mind for business leaders and legislators attempting to figure out what role regulation should play in the AI revolution.
In October 2022, the U.S. White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” in an effort to provide a framework for how government, technology companies, and citizens can work together to ensure more accountable AI. According to the White House, the document “is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.” The OSTP identifies five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; and (5) Human Alternatives, Consideration, and Fallback.
Contemporaneous with U.S. efforts, approximately 60 countries are pursuing AI initiatives in an effort to design and implement policies promoting the responsible use of AI technology, out of fear that, without adequate governance, AI could do significant harm to individuals and society. A specific example is the European Union’s Artificial Intelligence Act (EU AI Act), which is expected in 2024 and aims to introduce a common regulatory and legal framework for AI across all sectors except the military. Experts monitoring the development of such laws rightly ask whether the EU’s approach to AI will be broadly adopted across the globe, much as many of the world’s websites adopted the EU’s data privacy requirement to ask users for consent to process personal data and use cookies.
The Future Is Today
As governments around the world attempt to devise regulatory regimes that keep pace with AI technology, attorneys with immediate client problems to solve will have to grapple with how to integrate this technology into their workstreams while mitigating the associated risks. As every attorney has learned since the advent of Google, even though a well-crafted search can get you close, more work is required to ensure you have the right answer. The same approach applies to ChatGPT, though the discipline may prove harder to maintain when the answer sounds like it came from a lawyer.