The AI Update | August 29, 2023

#HelloWorld. In this issue, ChatGPT cannot be the next John Grisham, the secret is out on The New York Times’ frustrations with generative AI, and YouTube looks to a technological fix for voice replicas. Summer may soon be over, but AI issues are not going anywhere. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

AI cannot be a copyright author—for now. In one of the most anticipated copyright events of the summer (not Barbie-related), the federal district court in D.C. held that an AI system could not be deemed the author of a synthetically generated artwork. This was a test case brought by Stephen Thaler, a computer scientist and passionate advocate for treating AI as both copyright author and patent inventor, notwithstanding its silicon- and software-based essence. The D.C. district court, however, held firm to the policy position taken by the U.S. Copyright Office—copyright protects humans alone. In the words of the court: “human authorship is an essential part of a valid copyright claim.” Those who have followed Thaler’s efforts will remember that, about a year ago, the Federal Circuit similarly rejected Thaler’s attempt to list an AI model as an “inventor” on a patent application, holding instead that an inventor must be a “natural person.”

But let’s be cautious in drawing broader conclusions. Because Thaler brought his copyright suit (like his patent one) as a policy-motivated test case, he designed his copyright registration claim to push the envelope. Thaler’s application presented the AI system as the only entity involved in creating the final artwork; he claimed the work “was autonomously generated by AI.” But the court left open the question of whether copyright protection might attach when a human serves as a “guiding human hand” during the art-generation process. Think of someone iterating through dozens of prompts to get the AI to generate just the right image, or using the AI to create a “first draft” digital file, which he or she then further manipulates through image-editing software—like photographers do today in Photoshop or Lightroom. These more complex cases present far more interesting, nuanced copyright authorship issues, likely to be litigated soon in a courtroom near you.

Will the New York Times make litigation news of its own? Speaking of copyright litigation, our end-of-July issue observed that “big names portend big lawsuits.” The chatter in August has been about the New York Times. First, the Semafor newsletter reported that the Times had dropped out of a coalition negotiating with major tech companies to license news stories and other content for AI training. Shortly after, NPR, citing anonymous sources, reported that negotiations between the Times and OpenAI were “tense” and “have become so contentious that the paper is now considering legal action.” And just last week, the Times reportedly blocked OpenAI’s crawler from scraping its website for news stories; earlier in August, it had updated its terms of service—see paragraph 4.1(3)—to prohibit using Times content for “training a machine learning or artificial intelligence (AI) system.” Reading the tea leaves, this fall may bring us the biggest litigation yet over generative AI training.
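For the technically curious: the blocking was reportedly accomplished through the Times’ robots.txt file, the standard mechanism by which a website tells crawlers what they may fetch, and one that OpenAI says its GPTBot crawler honors. Below is a minimal sketch—the directives and URLs are illustrative, not the Times’ actual file—showing what such an entry looks like and how to verify its effect with Python’s standard-library robots.txt parser:

```python
# Minimal sketch (illustrative, not the Times' actual robots.txt): a
# publisher disallows OpenAI's GPTBot crawler site-wide, then checks the
# effect using Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; GPTBot is the user-agent string OpenAI
# documents for its web crawler.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is refused everywhere; a crawler with no matching entry is not.
print(parser.can_fetch("GPTBot", "https://example-paper.com/section/world"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example-paper.com/"))         # True
```

Note the limits of the mechanism: robots.txt is a polite convention, not a technical barrier, which is part of why the Times paired it with the terms-of-service change described above.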

YouTube and digital AI voice replicas. On the music side, YouTube published a blog post on August 21 announcing that it was “partnering with the music industry on AI technology.” The centerpiece of YouTube’s plan appears to be an expansion of its Content ID system, which flags unauthorized uses of songs in YouTube videos and gives copyright owners the choice of having the offending video blocked or, alternatively, monetized for the owner’s benefit. With generative AI, users can now mimic an artist’s voice in an entirely new song—a use that raises no straightforward copyright infringement claim, because a voice itself is not traditionally protected by copyright. YouTube’s announcement implies that Content ID will expand to detect, and optionally block, digital AI voice replicas as well. This is a laudable goal in theory, although critics are already concerned that, in practice, the system will flag not only synthesized voices (“Deepfake Drake”) but also “a kid just trying to rap like Drake.” Obviously, the devil is in the implementation details.
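To make the mechanics concrete, here is a purely hypothetical sketch of the claim flow described above: an upload is matched against a catalog of reference works, and a match triggers whichever action the rights holder pre-selected. Every name and structure below is our illustrative assumption, not YouTube’s actual system or API—and the exact-match comparison stands in for the fuzzy audio fingerprinting a real system would need (the very step critics worry will sweep in the kid rapping like Drake).

```python
# Purely hypothetical sketch of a Content ID-style claim flow; names and
# structure are illustrative assumptions, not YouTube's actual API.
from dataclasses import dataclass
from enum import Enum, auto

class Policy(Enum):
    BLOCK = auto()     # take the matching video down
    MONETIZE = auto()  # leave it up, route ad revenue to the rights holder

@dataclass
class ReferenceAsset:
    owner: str
    fingerprint: str   # stand-in for an audio/voice fingerprint
    policy: Policy

def handle_upload(upload_fingerprint: str, catalog: list[ReferenceAsset]) -> str:
    """Compare an upload against the reference catalog and apply the
    matching owner's chosen policy. Exact string equality stands in for
    the fuzzy matching a real fingerprinting system would use."""
    for asset in catalog:
        if asset.fingerprint == upload_fingerprint:
            if asset.policy is Policy.BLOCK:
                return f"Blocked at {asset.owner}'s request."
            return f"Monetized for the benefit of {asset.owner}."
    return "No match; video published normally."

catalog = [ReferenceAsset("LabelCo", "voice-print-123", Policy.MONETIZE)]
print(handle_upload("voice-print-123", catalog))  # Monetized for the benefit of LabelCo.
print(handle_upload("original-voice", catalog))   # No match; video published normally.
```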

What we’re reading: More sobering than light-hearted, a recent opinion poll of 1,001 voters put out by the Artificial Intelligence Policy Institute, a new think tank focused on AI regulation, reflects considerable mainstream skepticism of AI. Reportedly, 72% of those polled would prefer to slow down AI development; 62% are primarily concerned about AI while only 21% are primarily excited about it; and 82% lack trust in tech executives to regulate AI. Looks like not everyone is buying picks and shovels for an AI gold rush.

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola

If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.


The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
