#HelloWorld. Much to catch up on from February and the first half of March. In this issue, we cover the latest AI activity from Europe, as well as a bevy of guidance and updates from U.S. agencies. Off to the races. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The EU AI Act is upon us. This week, the European Parliament finally approved the long-awaited Artificial Intelligence Act, the final text of which publicly leaked back in December. The Act is expected to come into force by May or early June, after it is proofread and published in the Official Journal of the European Union. However, many of its provisions, including those directed to general-purpose AI models (the category covering foundation model LLMs), have staggered effective dates, giving AI developers and deployers time to come into compliance. For those interested in delving deeper into the legislation's language, the Future of Life Institute maintains a useful "AI Act Explorer" web page.
Some subject-matter-specific guidance from U.S. agencies. Meanwhile, several agencies in the U.S. spent February offering formal and informal statements about AI usage in specific business areas. For instance, on February 6, the Centers for Medicare & Medicaid Services (CMS) within HHS issued FAQs clarifying that while Medicare Advantage insurers can use AI algorithms to "assist" in coverage-related decisions, such algorithms cannot "alone" replace consideration "of the individual patient's medical history, the physician's recommendations" and "clinical notes" for the individual. CMS' guidance appears motivated by STAT's unflattering news coverage last year of the nH Predict algorithm, which to date has triggered two class actions against NaviHealth, the model developer, and Humana, the insurer that deployed it.
The FTC has been making pronouncements of its own. On February 15, in an effort to curb the AI "deep fake" phenomenon, the FTC finalized a rule making it an unfair or deceptive practice to impersonate a government agency or business. The Commission at the same time proposed extending the rule to prohibit impersonation of individuals. And then two weeks later, as reported in Bloomberg, FTC Chair Lina Khan, speaking at the RemedyFest conference (a name only lawyers could love), announced the Commission was working on "bright line" rules making certain personal data "off limits for model training," namely "sensitive health data, geolocation data and browsing data." Along these same lines, in a blog post earlier in the month, the FTC cautioned companies seeking to avail themselves of user data already on hand that relaxing consumer privacy protections "through a surreptitious, retroactive amendment" to terms of service or privacy policies could run afoul of unfair or deceptive practice rules.
What the Copyright Office is up to. Not to be outdone, the Copyright Office on February 23 updated Senate and House subcommittees on the progress of the artificial intelligence study the Office began last fall. Here’s the breakdown, according to the update letter:
- This spring, the Office will publish the “first section” of its study, focused “on the use of AI to digitally replicate individuals’ appearances, voices, or other aspects of their personality.”
- In the summer, the Office will issue the “second section,” addressing “copyrightability of works incorporating AI-generated material” including guidance on how to register such works with the Office. (The Office will separately update its Compendium of U.S. Copyright Office Practices with additional instructions.)
- Finally, the Office’s “goal” is to disseminate, by the end of September, the final two sections of the study, directed to two of the thorniest issues of copyright law as applied to AI: “the legal implications of training AI models on copyrighted works” and “the allocation of potential liability for AI-generated outputs that may infringe.” These, of course, are the same issues confronting generative AI developers in the close-to-twenty copyright litigations filed against them since last fall. (The website ChatGPT is Eating the World has a nice, generally up-to-date tracker of these cases.)
California prunes claims (pun slightly intended). Speaking of the copyright wars being waged in federal court, some early decisions, at the motion-to-dismiss stage, have started to arrive. Back in November, we covered the highlights of an early Northern District of California opinion in Andersen v. Stability AI. More recently, on February 12, the Northern District spoke again, this time in Tremblay v. OpenAI and Silverman v. OpenAI, now consolidated. In that order, Judge Araceli Martinez-Olguin reaffirmed that synthesized AI output must be "substantially similar" to the copyrighted work at issue for copyright liability to attach, rejecting the plaintiffs' aggressive theory that would cast all AI outputs as infringing "derivative works." The judge also:
- dismissed the DMCA claims because the plaintiffs had not pled enough facts to suggest that OpenAI had deliberately removed copyright management information (like copyright notices and metadata) during the model training process;
- dismissed the negligence claims on the ground that OpenAI owed no discernible legal "duty to safeguard Plaintiffs' works"; and
- dismissed the unjust enrichment claim because the plaintiffs had not pled facts supporting a claim that OpenAI’s benefit from training on the plaintiffs’ works was secured “through mistake, fraud, coercion, or request.”
What should we be following? Synthetic content is proliferating on Amazon, on social media, and across the web generally. But sometimes there are surefire signs that the content you're dealing with has no human behind it, as recounted in a Washington Post piece from earlier this year. Are you reading one of the phrases that follow? You just might be dealing with an LLM. "I'm sorry, but I cannot complete this task." "I'm sorry, but I can't provide the requested response." "As an AI language model…." (If you stumble upon one of these expressions while perusing The AI Update, please send help. Fast!)
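For readers inclined to automate that spot-check, here is a minimal Python sketch of the idea: scan a block of text for these boilerplate refusal phrases. The phrase list and function name are our own illustrative assumptions, not drawn from any tool mentioned above.

```python
# Minimal sketch: flag text containing boilerplate LLM refusal phrases.
# The phrase list and helper name are illustrative assumptions, not a cited tool.

TELLTALE_PHRASES = [
    "i'm sorry, but i cannot complete this task",
    "i'm sorry, but i can't provide the requested response",
    "as an ai language model",
]

def looks_machine_generated(text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    # Normalize case and curly apostrophes before matching.
    lowered = text.lower().replace("\u2019", "'")
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

# Example: a suspicious product review.
print(looks_machine_generated(
    "Five stars! As an AI language model, I cannot form opinions."
))  # prints True
```

A substring check like this will of course miss paraphrased or edited output; it catches only the verbatim tells the Washington Post piece describes.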
Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.
Editor-in-Chief: Alex Goranin
Deputy Editors: Matt Mousley and Tyler Marandola
If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.