On August 11, 2025, Judge Susan Illston of the Northern District of California denied a motion to dismiss in Taylor v. ConverseNow Technologies, Inc. (Case No. 25-cv-00990-SI), allowing claims under California’s Invasion of Privacy Act (CIPA) Sections 631 and 632 to move forward against an AI voice assistant provider. ConverseNow provides artificial intelligence voice assistant technology that restaurants, including Domino’s, use to answer phone calls, process orders and capture customer information. The plaintiff alleged that when she placed a pizza order by phone, her call was intercepted and routed through ConverseNow’s servers, where her name, address and credit card details were recorded without her knowledge or consent. Read the full Alert on the Duane Morris website.
Northern District of California Decides AI Training Is Fair Use, but Pirating Books May Still Be Infringing
Two groundbreaking decisions from the Northern District of California—Kadrey v. Meta Platforms, Inc. and Bartz v. Anthropic PBC—shed light on how courts are approaching the use of copyrighted materials in training large language models (LLMs). Both cases involved authors alleging copyright infringement based on the use of their books to train generative AI models, and both courts held that use of the copyrighted materials to train the AI models was transformative. The court in Anthropic held, however, that copying pirated books constitutes copyright infringement and the transformative nature of the use did not rescue such infringement. Conversely, the Meta court held that copying from pirate sites to train AI is fair use, but only because the plaintiffs failed to submit evidence of market harm, which the court believed to be the most relevant factor. As such, while use of copyrighted works to train AI may be fair use, copying works without permission carries the risk of infringement. Read the full Alert on the Duane Morris website.
California Passes Novel Law Governing GenAI in Healthcare
California has passed a new AI law, Assembly Bill No. 3030, which establishes disclaimer requirements for healthcare providers that send patients unvetted communications generated by artificial intelligence. AB 3030 takes effect January 1, 2025. Under the new law, when a covered provider uses AI to generate a patient communication concerning a patient’s clinical information, that communication must include a disclaimer stating that it was generated by AI. Read the full Alert on the Duane Morris website.
The AI Update | April 4, 2024
#HelloWorld. Spring has sprung. While the EU AI Act receives wall-to-wall coverage in other outlets, this issue highlights recent rules, warnings, and legislative enactments here in the U.S. And it ends on a personal, meditative note from an AI user, worth a read. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The AI Update | March 14, 2024
#HelloWorld. Much to catch up on from February and the first half of March. In this issue, we cover the latest AI activity from Europe, as well as a bevy of guidance and updates from U.S. agencies. Off to the races. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
California’s Generative AI Report Addresses Benefits and Risks of AI
By Milagros Astesiano and Ariel Seidner
Following Governor Newsom’s September 2023 Executive Order on Artificial Intelligence, California’s state administration released a report analyzing the potential benefits and risks surrounding the use of Generative Artificial Intelligence (“GenAI”) within the state government (the “Report”). This is the first of many steps called for under the Executive Order. Continue reading “California’s Generative AI Report Addresses Benefits and Risks of AI”