#HelloWorld. In this issue, the Copyright Office asks all the right questions—but will it do something interesting with the answers? Microsoft and Adobe offer clever ideas of their own. And, surprise (not really): Three new lawsuits against AI developers. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Copyright Office has questions. Since the spring, the U.S. Copyright Office has devoted considerable effort to its AI Initiative, launching an AI webpage, holding four public listening sessions, and hosting educational webinars. In what it calls a “critical next step,” the Office on August 30 published a notice of inquiry seeking written comments (due October 18) on some 66 wide-ranging AI-related questions. The inquiries are as comprehensive as a law school syllabus, covering subjects such as:
- Descriptions of the ways developers collect, curate, and store datasets used to train AI models;
- Potential licensing regimes for compensating creators whose works are used in AI model training;
- Whether it’s technologically possible or economically feasible for an AI model to “unlearn” data it was trained on;
- Whether it’s possible or feasible to determine the extent to which an AI output was influenced by a specific piece of training data;
- The level of specificity and transparency AI developers and deployers should provide about their training data;
- Whether human authorship should be required for copyright protection;
- Whether substantial similarity is the proper test for determining whether an AI output infringes; and
- Labeling requirements (if any) for AI-generated material.
According to the notice, this information will be used to help inform the Office’s stance on AI-related legislation and regulation. That’s not the clearest explanation of intent, but one thing is certain: The comments should provide an excellent snapshot of the state of generative AI and IP law as of fall 2023. Organizing industry and public comments into user-friendly reports and collections is one of the Copyright Office’s superpowers.
Adobe and Microsoft have answers. While the Copyright Office went broad, two well-known tech companies tried something more narrow. First up, Microsoft, which has invested billions in OpenAI and is implementing generative AI throughout its products under the “Copilot” brand. To help spur adoption, Microsoft announced on September 7 an indemnity, alliteratively named the “Copilot Copyright Commitment.” In Microsoft’s words: “if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit.” Important conditions and carve-outs: Paid versions only, existing filters and guardrails cannot have been disabled, and the customer can’t have tried to generate infringing output (e.g., by providing input that the customer does not have rights to use).
Adobe’s CEO meanwhile published a blog post promoting a new Federal Anti-Impersonation Right (FAIR) Act. This proposal “would provide a right of action”—under federal, not state, law—“to an artist against those that are intentionally and commercially impersonating their work or likeness through AI tools.” Charitably, this proposal aims to fill a gap in the law, where users employ generative AI tools to create output “in the style of” a particular artist without that output necessarily copying enough literal expression to infringe. A smidge more cynically, the proposed act looks to pin principal liability on individual users rather than on model developers like Adobe.
More of the same on the litigation front. Back in late July, we recapped the eight major lawsuits targeting generative AI, mainly filed in the Northern District of California, mainly proposed as class actions, and mainly alleging copyright and privacy violations. Here come three more attempted class actions, all filed in that same court in the first two weeks of September. A group of authors including Michael Chabon, of “Wonder Boys” and “The Amazing Adventures of Kavalier and Clay” fame, sued OpenAI and, in a separate complaint, another major AI developer for alleged copyright and DMCA offenses, while two anonymous plaintiffs charged OpenAI and Microsoft with various privacy-related violations. All three suits largely mirror the structure and theories of the previous California cases. (If you’d like an updated litigation tracker, send us an email.)
What we’re reading: Sometimes all the talk of AI benefits and risks can feel disconnected from practical applications on the ground. Not so in this thoughtful Stanford Technology Law Review student note, which describes in detail the many ways in which states and NGOs are already using AI models to track and evaluate human rights issues—for instance, by automating the process of reviewing and clustering global media news reports to identify problematic death penalty cases. The note reinforces one of The AI Update’s chief mantras: Don’t think of AI as a magic box. Start with a specific use case and then work backwards to see whether an AI tool can add efficiency to the process.
What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.
Editor-in-Chief: Alex Goranin
If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.