#HelloWorld. January has not been especially frantic on the legal-developments-in-AI front. Yes, we know the anticipated final text of the EU AI Act was published unofficially, but the final vote hasn’t happened yet, so we’re biding our time for now. Meanwhile, in this issue, we check in with state bar associations, SAG-AFTRA, and the FTC. They have things to say about AI policy too, so we’ll listen. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
#HelloWorld. October swirls with AI headlines, but one senses a running-to-stand-still quality. Like the opening moves in a chess game, players continue to arrange regulatory and litigation pieces on the board, but the first true clash still awaits. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Before ChatGPT and other artificial intelligence (AI) large language models exploded on the scene last fall, there were AI art generators, based on many of the same technologies. Simplifying somewhat: in the context of art generation, these technologies involve a company first setting up a software-based network, loosely modeled on the brain, with millions of artificial “neurons.” […] This article has two goals: to provide a reader-friendly introduction to the copyright and right-of-publicity issues raised by such AI model training, and to offer practical tips about what art owners can do, currently, if they want to keep their works away from such training uses. […]
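For readers curious what an artificial “neuron” actually computes, here is a deliberately toy sketch: each neuron takes a weighted sum of its inputs and squashes the result through a nonlinearity. Real image generators chain millions of these units and learn the weights from training data; the specific numbers and names below are purely illustrative, not drawn from any actual model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, plus a bias,
    passed through a sigmoid activation that squashes the result
    into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative only: three made-up input "features" (e.g., pixel
# brightnesses) fed through one neuron with made-up weights.
activation = neuron([0.2, 0.8, 0.5], [0.4, -0.6, 1.1], bias=0.05)
```

Training, in this simplified picture, is the process of nudging the weights and biases across millions of such neurons until the network’s outputs resemble the works it was shown, which is precisely why the choice of training data raises the copyright questions this article explores.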
#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audio books, absent the author’s consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.
Duane Morris partner Alex Goranin will be moderating the webinar “Liability Considerations in Enterprise Use of Generative AI” on June 27, hosted by The Copyright Society.
For more information and to register, visit The Copyright Society website.
About the Program
Since ChatGPT burst onto the scene last fall, developers of large language and other foundation models have raced to release new versions; the number of app developers building on top of the models has mushroomed; and companies large and small have considered—and reconsidered—approaches to integrating generative AI tools within their businesses. With these decisions has come a cascade of practical business risks, and copyright and copyright-adjacent issues have taken center stage. After all, if your marketing team’s Midjourney-like AI image generator outputs artwork later accused of infringement, who is ultimately responsible? And how can you mitigate that risk—through contractual indemnity? through guardrails deployed in your training process? through post-hoc content moderation?
- Alex Goranin, Intellectual Property Litigator, Duane Morris LLP
- Peter Henderson, Stanford University
- Jess Miers, Advocacy Counsel, Chamber of Progress
- Alex Rindels, Corporate Counsel, Jasper
#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”
We watched all five-plus hours of testimony so you don’t have to. Here are the core takeaways from the sessions:
from the Duane Morris Technology, Media & Telecom Group
#HelloWorld. Welcome to the first edition of The AI Update. Every other week, we’ll provide you with a curated summary of the most relevant, impactful legal developments in the world of AI. Let’s stay smart together.
Our mission: Since ChatGPT’s public launch last November, the onslaught of AI-related news has been daily and relentless. We’ve guided our clients one-on-one about legal developments, the knowns and unknowns, and what we see coming down the road. So much so that a centralized information exchange—this newsletter—feels like a logical next step. Why every two weeks and why only one page? So as not to continue the flood. What if you want more detail? Contact us individually and we’ll get you up to speed. There’s a lot of noise out there; we try to focus on the signal.
Regulatory activity in the U.S.: For now, two agencies have emerged as the most vocal in the AI space (at least in public). The Copyright Office released guidance on how to seek copyright protection for works created with generative-AI assistance. In short: a human author is required and any AI-tool use must be disclosed and disclaimed. The Office is also holding a series of public listening sessions on this and other related AI topics, like the use of copyrighted works to train AI models. The sessions start on April 19—stay tuned for highlights in future editions of The AI Update.