Can AI Creations Be Copyrighted? Supreme Court Could Decide

By Mark Lerner

Following the refusal to register a work whose sole author was identified as "Creativity Machine," a generative AI that Stephen Thaler created, the D.C. Circuit affirmed that works authored exclusively by artificial intelligence are ineligible for copyright protection. The court read the Copyright Act to require human authorship, in keeping with the Copyright Office's interpretation and prior case law. A petition for certiorari and a supporting amicus brief now ask the U.S. Supreme Court to take up the question of whether the Copyright Act requires human authorship, arguing that the statute's text, structure and purpose do not categorically impose such a requirement and that existing doctrines leave room for AI to be recognized as the author of protected works.

Read the full Alert on the Duane Morris website.

Calif. Governor Rejects “No Robo Bosses” Act

By Alex W. Karasik, Brian L. Johnsrud, and George J. Schaller

Duane Morris Takeaways: On October 13, 2025, California Governor Gavin Newsom issued a written statement declining to sign Senate Bill 7, known as the "No Robo Bosses" Act (the "Act"). While the Act aimed to restrict when and how employers could use automated decision-making systems and artificial intelligence, Governor Newsom rejected the proposed legislation, citing the Act's broad drafting and unfocused notification requirements. Governor Newsom's statement reflects an initial pushback against a wave of pending AI regulations as states wrestle with suitable AI guidance. Given the pro-employee tendencies of Governor Newsom and California regulators generally, this outcome is a mild surprise. Employers nonetheless should expect continued scrutiny of AI regulations before enactment.

This legislative activity surely sets the stage for what many believe is the next wave of class action litigation.

See more on the Duane Morris Class Action Defense Blog.

Takeaways from R.I.S.E. AI Conference

This week at the University of Notre Dame's inaugural R.I.S.E. AI Conference in South Bend, Indiana, partner Alex W. Karasik of the Duane Morris Class Action Defense Group was a panelist at the highly anticipated session, "Challenges and Opportunities for Responsible Adoption of AI." The conference, which drew over 300 attendees from 16 countries, produced excellent dialogue on how cutting-edge technologies can both solve and create problems, including class action litigation.

Read more at the Duane Morris Class Action Defense Blog.

First Consumer-Facing AI Governance Rules Enacted in U.S.

In an important development in U.S. AI regulation, California enacted its automated decisionmaking technology (ADMT) rules in September 2025. These are the first enacted, broadly scoped, consumer-facing AI governance rules in the country, providing consumers with opt-out rights and logic disclosures for significant decisions driven by AI. The rules took effect on October 1, 2025, with compliance required by January 1, 2027, for covered businesses that use ADMT in significant decisions before that date. Read the full Alert on the Duane Morris website.

Updated Artificial Intelligence Regulations for California Employers

With artificial intelligence developing at breakneck speed, California employment regulations are following close behind. Updated regulations issued by the California Civil Rights Council address the use of artificial intelligence, machine learning, algorithms, statistics and other automated-decision systems (ADS) to make employment-based decisions. The updated rules, which took effect October 1, 2025, amend existing regulations under Cal. Code Regs., tit. 2, and are designed to protect against potential employment discrimination. The regulations apply to all employers with at least five employees, wherever those employees work, provided at least one is located within California. Read the full Alert on the Duane Morris website.

Artificial Intelligence Errors for Construction Contractors

In a recent Commercial Construction Renovation article, Duane Morris attorneys Robert H. Bell and Michael Ferri write:

Artificial intelligence (“AI”) is rapidly making its way into the construction bidding process. Contractors now use AI-powered estimating software to perform quantity takeoffs and analyze costs with unprecedented speed. According to the drafting and engineering software giant Autodesk, estimating teams are increasingly using AI and automation, particularly for quantity takeoffs, cost forecasting, and speeding up bid creation. Yet as digital tools become routine, legal rules governing bids still rely on traditional principles. This raises a pressing question: if an AI tool makes a costly error in a bid, will the legal system treat that mistake any differently than a human error? Courts are only beginning to grapple with AI-related mishaps, but early indications suggest AI errors will be handled much like any other bidding mistake. In other words, contractors will likely be held responsible for errors made by their AI tools, just as they are responsible for the mistakes of human estimators or means and methods under their control.

“Responsible Use of AI in Healthcare” Guidance

On September 17, 2025, the Joint Commission and Coalition for Health AI issued a joint guidance document entitled “Responsible Use of AI in Healthcare” to help providers implement AI while mitigating the risks of its use. The guidance provides seven elements that constitute responsible AI use in healthcare and discusses how provider organizations can implement them. Read the full Alert on the Duane Morris website.

Managing Compliance Challenges of Artificial Intelligence Pricing Tools

Duane Morris special counsel Justin Donoho authored the Journal of Robotics, Artificial Intelligence & Law article, “Ten Design Guidelines to Mitigate the Risk of AI Pricing Tool Noncompliance with the Federal Trade Commission Act, Sherman Act, and Colorado AI Act.” The article is available here and is a must-read for corporate counsel involved with development or deployment of AI pricing tools.

Northern District of California Allows CIPA Claims Against AI Pizza Ordering Assistant to Proceed

On August 11, 2025, Judge Susan Illston of the Northern District of California denied a motion to dismiss in Taylor v. ConverseNow Technologies, Inc. (Case No. 25-cv-00990-SI), allowing claims under Sections 631 and 632 of the California Invasion of Privacy Act (CIPA) to move forward against an AI voice assistant provider. ConverseNow provides artificial intelligence voice assistant technology that restaurants, including Domino's, use to answer phone calls, process orders and capture customer information. The plaintiff alleged that when she placed a pizza order by phone, her call was intercepted and routed through ConverseNow's servers, where her name, address and credit card details were recorded without her knowledge or consent. Read the full Alert on the Duane Morris website.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.