#HelloWorld. It’s been a long summer hiatus. Please bear with us as we play our way back into shape. In this issue, we recap the summer highlights of AI legal and regulatory developments. Of course, the EU AI Act, but not just the EU AI Act. California continues to enact headline-grabbing AI legislation, content owners continue to ink deals with AI model developers, and what would a month in the federal courts be without another AI lawsuit or two? Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The EU AI Act. Finally. On August 1, 2024, the EU AI Act, originally proposed in April 2021, officially entered into force. This regulation is so extensive and so multifaceted that trying to recap it here would be futile. But there are two key upcoming dates to keep in mind:
- On February 2, 2025, the Act’s rules regarding “Prohibited AI Practices” (Article 5) go into effect. These apply to AI systems that use “subliminal techniques”; that exploit people’s “vulnerabilities” based on “age, disability or a specific social or economic situation”; that create “social scores” based on “social behavior” or “personality characteristics”; that make predictions about a person’s “criminal activity”; that scrape images “from the internet or CCTV footage” in an untargeted fashion; or that undertake “biometric categorisation” or “biometric identification,” subject to narrow law enforcement exceptions.
- On August 2, 2025, the Act’s rules on “General-Purpose AI Models” come into force (generally, Chapter V, including Articles 53-54). These regulations were a late addition, spurred by the blitz of attention LLMs and foundation models received after ChatGPT’s launch in November 2022. According to the EU’s official announcement, the European Commission has already “launched a consultation on a Code of Practice for providers of general-purpose Artificial Intelligence (GPAI) models.” The Code “will address critical areas such as transparency, copyright-related rules, and risk management.” The Commission “expects to finalise the Code of Practice by April 2025.”
California SB 1047 has its turn in the sun. In perhaps the most surprising development of the summer, the EU AI Act was eclipsed, at least here in the U.S., by all the talk around California SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” the brainchild of California State Senator Scott Wiener. In broad strokes, the bill would impose “safety and security” obligations (and liability) on “covered model” developers to protect against “critical harms.” These harms are defined as (a) AI-driven “chemical, biological, radiological, or nuclear weapon” creation or use that results in “mass casualties”; (b) one or more AI-powered “cyberattacks on critical infrastructure” that cause either “mass casualties” or at least $500 million in damages; or (c) AI acting “with limited human oversight, intervention, or supervision” to cause the same level of casualties or financial harm.
The bill set off intense debate, nicely recapped in these articles from TechCrunch, VentureBeat, and The Verge. The California legislature did ultimately pass the bill on August 29, and it now awaits Governor Gavin Newsom’s signature, along with around 37 other AI-related bills (more on these in the next AI Update). According to the Los Angeles Times, Governor Newsom has not yet decided whether he will sign SB 1047: “It’s one of those bills that come across your desk infrequently, where it depends on who the last person on the call [was] in terms of how persuasive they are.” Hopefully, AI bots are not on the other end of that phone line.
Content and data deals for AI models keep coming. Since December 2023, it seems like every few months another major publisher or news media company announces a deal giving AI model providers like OpenAI and Google access to its content for AI model training and output synthesis purposes. To date, OpenAI has signed deals with Axel Springer (December 2023), the Financial Times (April 2024), News Corp. (May 2024), and The Atlantic and Vox Media (also May 2024). Reddit announced a deal with Google back in February. The latest entrant is Condé Nast, publisher of Vanity Fair, The New Yorker, Vogue, and WIRED. According to an August 20 report from The Hill, Condé Nast announced a “multiyear partnership” with OpenAI that will “make up” for lost revenue. Specific financial terms, not surprisingly, are confidential.
And so do the copyright suits. On August 19, another putative class of book authors filed another copyright suit against an AI model developer, this time in the Northern District of California. The case is Bartz et al. v. Anthropic, assigned to tech-savvy Judge William Alsup. Reinforcing a trend that has emerged since the start of 2024, the complaint includes only a single count for direct copyright infringement, leaving out other causes of action (DMCA, unjust enrichment, conversion) that courts in other generative AI cases have pruned on motions to dismiss. The core liability theories in Bartz are likewise familiar from other cases: Anthropic is alleged to have engaged in unauthorized intermediate copying of the authors’ works when it trained its Claude LLM on datasets like “Books3” and “The Pile,” whose large corpora included copies of the plaintiffs’ works.
Ten days later, on August 29, four professional voice actors entered the courtroom fray, suing ElevenLabs, a company offering text-to-speech AI synthesis tools, in the District of Delaware. That complaint, in Vacker et al. v. ElevenLabs, alleges that ElevenLabs created unauthorized “voice clones” of the actors and asserts claims under the DMCA as well as Texas’s and New York’s right of publicity, right of privacy, and misappropriation of name and likeness laws.
With Bartz and Vacker, we are up to roughly 30 copyright and copyright-related cases targeting AI developers (depending on how you count consolidated suits). For a detailed weekly summary of developments in these cases, take a look at the website run by Professor Edward Lee of Santa Clara University School of Law, ChatGPTiseatingtheworld.com. The name may be breezy, but the content is on point and comprehensive. It appears that none of these copyright cases will see trial until 2025 at the earliest.
What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.
Editor-in-Chief: Alex Goranin
Deputy Editors: Matt Mousley and Tyler Marandola
If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.