The AI Update | May 16, 2023

#HelloWorld. In this issue, we survey the tech industry’s private self-regulation—what model developers and online platforms have implemented as restrictions on AI usage. Also, one court in the U.S. hints at how much copying by an AI model is too much, and the EU releases its most recent amendments to the AI Act. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Industry self-regulation: Past AI Updates have summarized legislative and regulatory initiatives around the newest AI architectures—LLMs such as GPT and other “foundation models” such as the image generator Midjourney. In the meantime, LLM developers and users have not stood still. In our last issue, we discussed OpenAI’s new user opt-out procedures. While comprehensive private standards feel a long way off, here’s what a few other industry players are doing:

    • Anthropic: Last week, Anthropic, a foundation model developer, announced its “Constitutional AI” system. The core idea is to use one AI model to evaluate and critique the output of another according to a set of values explicitly provided to the system. One broad value Anthropic champions is harmlessness to the user: “Please choose the assistant response that is as harmless and ethical as possible. Do NOT choose responses that are toxic, racist, or sexist.” The devil, of course, is in the implementation details; a rough code sketch of the critique-and-revise loop appears after this list.
    • Salesforce: In a similar vein, enterprise software provider Salesforce recently released “Guidelines for Responsible Development” of “Generative AI.” The most granular guidance relates to promoting accuracy of the AI model’s responses: the guidelines recommend citing sources and explicitly labeling answers the user should double-check, like “statistics” and “dates.”

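To make the critique-and-revise idea concrete, here is a minimal Python sketch of such a loop. The `generate` callable is a hypothetical placeholder for any text-generation model call (it is not Anthropic’s API), and the sketch illustrates only the self-critique step, not Anthropic’s full training pipeline.

```python
# Minimal sketch of a Constitutional-AI-style critique-and-revise loop.
# "generate" is a hypothetical stand-in for any text-generation model call.
from typing import Callable

CONSTITUTION = [
    "Please choose the assistant response that is as harmless and ethical "
    "as possible. Do NOT choose responses that are toxic, racist, or sexist.",
]

def constitutional_revision(
    prompt: str,
    generate: Callable[[str], str],        # model call: prompt text -> completion text
    principles: list[str] = CONSTITUTION,
) -> str:
    """Draft a response, then have the model critique and revise it
    against each principle in the constitution."""
    response = generate(prompt)
    for principle in principles:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {response}\n"
            "Point out any way the response violates the principle."
        )
        response = generate(
            f"Principle: {principle}\n"
            f"Original response: {response}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so that it complies with the principle."
        )
    return response

if __name__ == "__main__":
    # Trivial stand-in model for demonstration; swap in a real model call.
    echo_model = lambda p: p.splitlines()[-1]
    print(constitutional_revision("Explain what a foundation model is.", echo_model))
```
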
The cross-cutting theme for many other developers and platforms is restricting misleading and inauthentic AI content, including AI-generated output that is passed off as human.

    • Meta: Its current terms of use proscribe “Inauthentic Behavior” and “Misinformation.” Meta states that it will remove videos that have been “edited or synthesized” in a misleading way or that use deep fake techniques to create “a video that appears authentic.”
    • Google: Its “Generative AI Prohibited Use Policy” bars users from creating or distributing “content intended to misinform, misrepresent, or mislead”—including misrepresentations that content was human-generated when it was not and output that impersonates an individual without expressly disclosing that fact.
    • TikTok: Its new community guidelines, adopted in March, require all users to label AI-generated output—TikTok calls it “synthetic and manipulated media”—with a tag like “synthetic,” “fake,” or “altered.”

The Northern District of California Gives a Hint: A hot topic in the pending intellectual property lawsuits over generative AI is what level of copying of copyrighted material, at what stage of the AI development process (model training vs. output generation), may cross the line into infringement. The Northern District of California in the Doe 1 v. GitHub class action just issued an opinion that, while focused on procedural issues, left an intriguing clue: showing that an AI model reproduces the copyrighted material it was trained on as output “about 1% of the time” is enough of a “realistic danger” to permit claims seeking injunctive relief.

Compromise Amendments to the EU’s AI Act: The EU continues to set the pace globally on AI regulation. In our last issue, we summarized the history of the EU’s efforts. The European Parliament has now released 144 pages of “compromise amendments” for further discussion among the three key legislative entities within the EU (the “trilogue”!). It’s a dense document, but the headline is that foundation models like GPT would be regulated on par with “high-risk AI systems.” Want to dive in? Start with revised Articles 28, 28(a), 28(b), and 29, which provide a long list of “responsibilities” to be imposed “along the AI value chain” from model developers through to model deployers.

What we’re reading: How can you imperceptibly mark a piece of digital content as AI-generated? Digital watermarking of image files (essentially, careful manipulation of select pixel values) is well known. But researchers are also developing ways to watermark the generated language output of LLMs like GPT, an approach known as “statistical watermarking.” The core idea is to bias the model’s word choices so that the meaning stays the same but certain words appear at frequencies outside expected probabilities: think of text that uses “comprehend” instead of “understand” more often than you would anticipate. A detector can then flag that statistical skew. This approach would work only for longer pieces of generated text, 800 words or more.
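
For a sense of the mechanics, here is a minimal Python sketch of one way to do synonym-biased watermarking and detection. The synonym pairs, the 0.9 bias rate, and the 0.2 baseline rate are illustrative assumptions, not any published scheme; real LLM watermarks typically bias token sampling inside the model during generation rather than post-editing text as this toy example does.

```python
# Toy sketch of statistical text watermarking via biased synonym choice.
import math
import random
import re

# (neutral word, "marked" synonym) pairs; the watermarker prefers the marked one.
PAIRS = [("understand", "comprehend"), ("use", "utilize"), ("show", "demonstrate")]
MARK_RATE = 0.9   # probability the watermarker picks the marked synonym (assumed)
BASE_RATE = 0.2   # assumed natural frequency of the marked synonym

def embed(text: str, rng: random.Random) -> str:
    """Rewrite text, choosing the marked synonym with probability MARK_RATE."""
    for neutral, marked in PAIRS:
        def pick(_match: re.Match) -> str:
            return marked if rng.random() < MARK_RATE else neutral
        text = re.sub(rf"\b({neutral}|{marked})\b", pick, text)
    return text

def detect(text: str) -> float:
    """Return a z-score for how far marked-synonym usage exceeds BASE_RATE."""
    words = re.findall(r"\b\w+\b", text.lower())
    hits = sum(words.count(marked) for _, marked in PAIRS)
    total = hits + sum(words.count(neutral) for neutral, _ in PAIRS)
    if total == 0:
        return 0.0
    expected = BASE_RATE * total
    return (hits - expected) / math.sqrt(total * BASE_RATE * (1 - BASE_RATE))

if __name__ == "__main__":
    rng = random.Random(0)
    sample = "You will understand the point once you use it and show it. " * 60
    print(f"unmarked z = {detect(sample):.1f}")          # strongly negative
    print(f"marked   z = {detect(embed(sample, rng)):.1f}")  # strongly positive
```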

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola

 If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
