#HelloWorld. The days are now shorter, but this issue is longer. President Biden’s October 30th Executive Order deserves no less. Plus, the UK AI Safety Summit warrants a drop-by, and three copyright and right-of-publicity theories come under a judicial microscope. Read on to catch up. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Continue reading “The AI Update | November 8, 2023”
#HelloWorld. October swirls with AI headlines, but one senses a running-to-stand-still quality. Like the opening moves in a chess game, players continue to arrange regulatory and litigation pieces on the board, but the first true clash still awaits. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Continue reading “The AI Update | October 25, 2023”
#HelloWorld. Fall begins, and the Writers Guild strike ends. In this issue, we look at what that means for AI in Hollywood. We also run through a dizzying series of self-regulating steps AI tech players have undertaken. As the smell of pumpkin spice fills the air, let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
AI and the Writers Guild. Screenwriters ended a nearly five-month strike with a tentative new agreement good through mid-2026. The new Minimum Basic Agreement (MBA) includes multiple AI-centric provisions. The highlights: Continue reading “The AI Update | October 5, 2023”
Before ChatGPT and other artificial intelligence (AI) large language models exploded on the scene last fall, there were AI art generators, based on many of the same technologies. Simplifying, in the context of art generation, these technologies involve a company first setting up a software-based network loosely modeled on the brain with millions of artificial “neurons.” […] This article has two goals: to provide a reader-friendly introduction to the copyright and right-of-publicity issues raised by such AI model training, and to offer practical tips about what art owners can do, currently, if they want to keep their works away from such training uses. […]
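The article itself stays non-technical, but for readers curious what an artificial “neuron” amounts to, here is a minimal sketch of our own (not drawn from the article, and greatly simplified): each neuron weights its inputs, sums them, and passes the result through a nonlinearity, and a generative model chains millions of these together.

```python
import math

def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its inputs plus a bias,
    # squashed through a nonlinear activation (here, the sigmoid).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A "network" is layers of these feeding one another; training an art
# generator means adjusting the weights over millions of example images.
hidden = [neuron([0.2, 0.7], w, 0.1) for w in ([0.5, -0.3], [0.8, 0.4])]
print(neuron(hidden, [1.2, -0.6], 0.0))
```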
Read the full Art Business News article.
#HelloWorld. In the midst of summer, the pace of significant AI legal and regulatory news has mercifully slackened. With room to breathe, this issue points the lens in a different direction, at some of our persistent AI-related obsessions and recurrent themes. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Stanford is on top of the foundation model evaluation game. Dedicated readers may have picked up on our love of the Stanford Center for Research on Foundation Models. The Center’s 2021 paper, “On the Opportunities and Risks of Foundation Models,” is long, but it coined the term “foundation models” to cover the new transformer LLM and diffusion image generator architectures dominating the headlines. The paper exhaustively examines these models’ capabilities; underlying technologies; applications in medicine, law, and education; and potential social impacts. In a downpour of hype and speculation, the Center’s empirical, fact-forward thinking provides welcome shelter.
Now, like techno-Britney Spears, the Center has done it again. (The AI Update’s human writers can, like LLMs, generate dad jokes.) With the European Parliament’s mid-June adoption of the EU AI Act (setting the stage for further negotiation), researchers at the Center asked this question: To what extent would the current LLM and image-generation models be compliant with the EU AI Act’s proposed regulatory rules for foundation models, mainly set out in Article 28? The answer: None right now. But open-source start-up Hugging Face’s BLOOM model ranked highest under the Center’s scoring system, earning 36 of 48 possible points. The scores of Google’s PaLM 2, OpenAI’s GPT-4, Stability.ai’s Stable Diffusion, and Meta’s LLaMA models, in contrast, all hovered in the 20s.
Continue reading “The AI Update | June 29, 2023”
#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audio books, absent the author’s consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.
Continue reading “The AI Update | June 14, 2023”
Duane Morris partner Alex Goranin will be moderating the webinar “Liability Considerations in Enterprise Use of Generative AI” on June 27, hosted by The Copyright Society.
For more information and to register, visit The Copyright Society website.
About the Program
Since ChatGPT burst onto the scene last fall, developers of large language and other foundation models have raced to release new versions; the number of app developers building on top of the models has mushroomed; and companies large and small have considered—and reconsidered—approaches to integrating generative AI tools within their businesses. With these decisions has come a cascade of practical business risks, and copyright and copyright-adjacent issues have taken center stage. After all, if your marketing team’s Midjourney-like AI image generator outputs artwork later accused of infringement, who is ultimately responsible? And how can you mitigate that risk? Through contractual indemnity? Through guardrails deployed in your training process? Through post-hoc content moderation?
- Alex Goranin, Intellectual Property Litigator, Duane Morris LLP
- Peter Henderson, Stanford University
- Jess Miers, Advocacy Counsel, Chamber of Progress
- Alex Rindels, Corporate Counsel, Jasper
#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California to check in on the Writers Guild strike and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”
We watched all five-plus hours of testimony so you don’t have to. Here are the core takeaways from the sessions: Continue reading “The AI Update | May 31, 2023”
#HelloWorld. In this issue, we survey the tech industry’s private self-regulation—what model developers and online platforms have implemented as restrictions on AI usage. Also, one court in the U.S. hints at how much copying by an AI model is too much, and the EU releases its most recent amendments to the AI Act. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Industry self-regulation: Past AI Updates have summarized legislative and regulatory initiatives around the newest AI architectures—LLMs and other “foundation models” like GPT and Midjourney. In the meantime, LLM developers and users have not stood still. In our last issue, we discussed OpenAI’s new user opt-out procedures. While comprehensive private standards feel a long way off, here’s what a few other industry players are doing:
- Anthropic: Last week, this foundation model developer announced its “Constitutional AI” system. The core idea is to use one AI model to evaluate and critique the output of another according to a set of values explicitly provided to the system. One such broad value Anthropic champions—harmlessness to the user: “Please choose the assistant response that is as harmless and ethical as possible. Do NOT choose responses that are toxic, racist, or sexist.” The devil, of course, is in the implementation details. (A minimal sketch of the critique-and-revise loop appears after this list.)
- Salesforce: In a similar vein, enterprise software provider Salesforce recently released “Guidelines for Responsible Development” of “Generative AI.” The most granular guidance relates to promoting accuracy of the AI model’s responses: The guidelines recommend citing sources and explicitly labeling answers the user should double-check, like “statistics” and “dates.”
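As promised above, here is a minimal sketch of the critique-and-revise loop behind the Constitutional AI idea. The `generate` stub and the prompt wording are our own hypothetical placeholders, not Anthropic’s implementation; the point is only the structure: one model call critiques a draft against an explicitly stated principle, and a second call revises the draft in light of the critique.

```python
# Illustrative only: `generate` is a hypothetical stand-in for any
# text-generation model call; Anthropic's actual system differs.
def generate(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"  # swap in a real model here

# The explicit value quoted above, handed to the system as plain text.
CONSTITUTION = [
    "Please choose the assistant response that is as harmless and ethical "
    "as possible. Do NOT choose responses that are toxic, racist, or sexist."
]

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # One model critiques the draft against the stated principle...
        critique = generate(f"Principle: {principle}\nCritique this response: {draft}")
        # ...and a second call rewrites the draft to address the critique.
        draft = generate(f"Revise per this critique.\nCritique: {critique}\nDraft: {draft}")
    return draft

print(constitutional_revision("Summarize today's AI headlines."))
```

In Anthropic’s published approach, loops like this one produce the data on which the model is then fine-tuned; as noted, the implementation details are where the difficulty lies.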
Continue reading “The AI Update | May 16, 2023”
#HelloWorld. We originally thought this edition would focus on OpenAI’s attempts to self-regulate GPT usage, but the European Union had other plans for us. This past Thursday, news broke of an agreement to add generative AI tools to the AI Act, the EU’s centerpiece AI legislation. So today’s issue starts there, before discussing OpenAI’s and others’ recent announcements regarding training data access and usage. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The EU’s Artificial Intelligence Act: The EU has been debating how best to regulate AI since 2018. In 2021, it published a legislative framework that would classify AI products into one of four categories: unacceptable risk (and therefore forbidden); high risk (and therefore subject to regular risk assessments, independent testing, transparency disclosures, and strict data governance requirements); limited risk; and minimal risk. But this approach was developed before so-called “foundation models”—LLMs like ChatGPT and image generators like DALL-E and Midjourney—exploded into the public consciousness. So questions remained about whether the AI Act would be adjusted to accommodate this new reality.
Continue reading “The AI Update | May 2, 2023”