#HelloWorld. Fall begins, and the Writers Guild strike ends. In this issue, we look at what that means for AI in Hollywood. We also run through a dizzying series of self-regulating steps AI tech players have undertaken. As the smell of pumpkin spice fills the air, let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
AI and the Writers Guild. Screenwriters ended a nearly five-month strike with a tentative new agreement good through mid-2026. The Minimum Basic Agreement (MBA) includes multiple AI-centric provisions. The AI-related highlights:
-
- Studios cannot use AI to write or rewrite literary material, or to dilute a writer’s credit as the author of material.
- AI-synthesized material cannot serve as “source” material. This means studios cannot ask writers to adapt (at a lower pay scale) works generated by AI in the first instance.
- Studios must disclose whether AI created any part of the material given to a writer.
- Studios cannot mandate AI usage, but writers can opt to employ AI tools if the studio agrees.
One of the mostly debated issues (also at play in many recent copyright litigations) is whether writers’ works can be used to train generative AI models. The MBA saves that fight for another day: Writers reserve the right to “assert that exploitation of writers’ material to train AI is prohibited by” the Minimum Basic Agreement “or other law.”
Industry continues to self-regulate—and pay. One theme emerging from the Writers Guild compromise—disclosure of AI usage—has echoes in the AI industry at large. Social media sensation TikTok, for instance, recently announced a new collection of tools and technologies for labeling AI material posted by creators to the platform. One tool enables creators “to easily inform their community when they post AI-generated content.” Another tool under development would automatically detect content “edited or created with AI” and programmatically affix an “AI-generated” label to it.
Other platforms are moving beyond labeling to compensation. As previously reported, some owners of large (and therefore valuable) datasets have negotiated licensing deals for AI training directly with AI model developers. The separate deals OpenAI made this summer with Shutterstock and the Associated Press are two cases in point.
But what if you’re an individual artist without a huge aggregated portfolio of works to license? Enter Adobe. While the software company maintains that it has the legal right to train its Firefly image-synthesizing AI model on works uploaded to its platform, Adobe has started paying “bonuses” to artists whose images were used in training. The payout is a function of (a) the number of images submitted and (b) the number of times someone licensed those images in the preceding 12 months. Unfortunately, Adobe hasn’t disclosed how big the bonuses paid to date have been, but an educated guess places them closer to the $1 to $100 end of the spectrum than the $1,000 to $10,000 end. Valuation of individual contributions to a training dataset is one of the biggest unsolved issues in this space.
Another red team for OpenAI. Last week OpenAI announced the start of an OpenAI Red Teaming Network. “Red teaming” is not new; it refers to the process of testing systems for weaknesses by actively trying to find vulnerabilities. Cybersecurity defense against malicious hackers is a classic use case. In the generative AI context, think of someone attempting to “trick” an AI model through various prompt variations into giving harmful advice, spreading misinformation, disclosing proprietary data, or leaking private personal information. According to OpenAI’s blog, the company’s Red Teaming Network will have experts recruited from a broad range of subjects involved in “rigorously evaluating” future iterations of OpenAI products. The list of desired domain expertise reads like a college curriculum: cognitive science, biology, computer science, political science, persuasion, anthropology, finance, healthcare, biometrics, psychology, law, chemistry, and on and on.
What we’re reading: Speaking of college reminiscences, we came across an interesting paper from a University of Pennsylvania and Google Research team, asking the question: Can a language model be trained to direct a Dungeons & Dragons game, responding “as the player who runs the game—i.e., the Dungeon Master”? Interestingly, the team’s answer encapsulates the current state of LLMs: Models can generate “evocative, in-character text” to serve as “inspiration” for human Dungeon Masters, but they cannot quite track “the full state” of the game for long enough to enable “fully autonomous” DMs. Humans live to fight another day, no dodecahedral dice roll required…yet.
What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.
Editor-in-Chief: Alex Goranin
Deputy Editors: Matt Mousley and Tyler Marandola
If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.