Artificial Intelligence, the Copyright Act and Animal Law

On March 18, 2025, the U.S. Court of Appeals for the D.C. Circuit affirmed a district court ruling that a work generated by an artificial intelligence (AI) machine cannot be registered in the name of the machine itself because the Copyright Act requires that a copyright owner be a human being. Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025).

In fact, the D.C. Circuit drew a specific connection to animal law by citing Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018), in which the Ninth Circuit held that a monkey cannot be an “author” under the Copyright Act. And, like Thaler, animal rights groups have tried to base their arguments on dictionary definitions. Read more on the Animal Law Developments Blog.

AI Suit Illustrates Challenges for Protecting Proprietary Information

Duane Morris partner Agatha Liu is quoted in the Bloomberg Law article, “Trade Secrets Law Is Awkward Fit in AI Prompt-Hacking Lawsuit,” about a medical AI company’s novel trade secrets lawsuit that illustrates the challenges artificial intelligence presents for protecting proprietary information.

Liu said hacking AI to reveal its prompts is “not a good thing, but it’s not terribly illegal.” AI developers most likely will have to stay on top of best practices and craft their products to save them from themselves, she said.

“If you want to reduce risk, you need to up the ante and make your system more resilient and context-aware,” Liu said.

To read the full article, visit the Bloomberg Law website.

Artificial Intelligence Tools and Copyright Infringement Issues During the Training Process

Duane Morris attorneys Jennifer Lantz, Jeremy Elman and Max DiBaise authored the Bloomberg Law article, “Generative AI Training Case Flags Competition as Major Factor,” exploring what the Thomson Reuters v. Ross Intelligence decision’s novel application of copyright law’s “fair use” defense means for generative AI training.

Companies must be mindful of the ultimate purpose of new artificial intelligence tools to avoid running into copyright infringement issues during the training process. If its reasoning is widely adopted, the Thomson Reuters v. Ross Intelligence decision suggests that “intermediate copying” cases are unlikely to provide a strong defense when the final output of a tool mirrors the products it was trained on. Accordingly, the key question is likely to be the extent to which the AI system competes with the underlying copyrighted work: the further removed the system is, the more likely it is to be protected under the fair use doctrine. Read the full article on the Bloomberg Law website.

FDA Draft AI Guidance Marks a New Era for Biotech, Diagnostics and Regulatory Compliance

The U.S. Food and Drug Administration’s recent release of two draft guidance documents on the use of artificial intelligence in drug development, biologics and medical devices has sparked both excitement and skepticism. As AI increasingly permeates these fields, the regulatory landscape is just beginning to take shape—and these proposed guidelines take a step in that direction by raising awareness of important questions about the future of AI innovation in life sciences. For therapeutic, medical device and diagnostics companies—whether already implementing AI or just beginning to explore its potential—the message is clear: The landscape is evolving, and future success will require thoughtful consideration of compliance, patient safety and privacy protection from the earliest stages of AI adoption.

Read the full Alert on the Duane Morris LLP website.

Data Privacy and Consumer Protections in 2025

Duane Morris partner Michelle Hon Donovan shares insight with NBC News about the privacy laws that take effect this year.

Eight states will have privacy laws take effect this year: Delaware, Iowa, Nebraska, New Hampshire, New Jersey, Maryland, Minnesota and Tennessee. The laws impose stricter obligations on businesses handling personal data and grant consumers the right to more transparency about how their data is collected, used and shared, according to Donovan. Not all companies will be required to comply, as each state has its own requirements and thresholds; Nebraska, for example, exempts small businesses.

Donovan said that before 2020, there were few laws across the country addressing privacy except for online privacy laws in a handful of states. Federal laws mostly focus on certain industries, she added, like the Family Educational Rights and Privacy Act and the Health Insurance Portability and Accountability Act.

Read the full article on the NBC News website.

New Law on Generative AI in Healthcare

California has passed a new AI law, Assembly Bill No. 3030, which establishes disclaimer requirements for healthcare providers sending patients unvetted messages generated by artificial intelligence. AB 3030 is effective January 1, 2025. Under the new law, when a covered provider uses AI to generate a patient communication concerning a patient’s clinical information, that communication must include a disclaimer stating that it was generated by AI. Read the full Alert on the Duane Morris website.

Webinar: Artificial Intelligence and Data Licensing

Duane Morris’ Technology Transactions, Licensing and Commercial Contracts Group presents a webinar, Understanding Data Licensing in the World of AI, to be held on Wednesday, December 18, 2024, from 1:00 p.m. to 2:00 p.m. Eastern.


As we head into 2025, more and more of our clients are negotiating data licensing agreements and asking for assistance in understanding a company’s rights regarding data. This webinar will review intellectual property rights in data, the frequent use and terms of Creative Commons licenses with datasets, and important and commonly negotiated terms in data licensing agreements, with our attorneys providing thoughts on these issues and how they relate to AI. Learn more.

New York Department of Financial Services Issues Cybersecurity Threat Alert as Malicious Activity Rises

The New York Department of Financial Services (DFS) published an alert directed to all DFS-regulated entities specifically warning of a widespread cybersecurity threat involving social engineering of regulated institutions’ IT help desk personnel and call center personnel.

According to the alert, DFS has detected a trend in which threat actors have targeted IT personnel as a part of schemes to gain system access through password resets and diversion of multi-factor authentication (MFA) to new devices. According to DFS, threat actors have employed tactics including voice-altering technology and leveraging information found online about identities of individuals, in attempts to convince IT personnel at help desks and call centers to comply with fraudulent access requests.

DFS cautions all regulated entities to be on “high alert for suspicious communications” based on the observed threat actors’ recent activity. Entities are encouraged by DFS to:

  • implement secure controls for password changes and MFA device configuration;
  • exercise caution in authenticating the identity of anyone who tries to change a password or MFA device; and
  • remain vigilant when receiving requests from individuals and vendors regarding system access. 

DFS included a link to guidelines published by the U.S. Department of Homeland Security’s Cybersecurity & Infrastructure Security Agency (CISA). The guidelines from CISA (CISA: Avoiding Social Engineering and Phishing Attacks) identify best practices to protect against these cyber threats, covering:

  • Distinctions between common methods of social engineering employed by threat actors
  • Common indicators of malicious activity disguised as a legitimate communication
  • Proactive measures to minimize the risk of disclosing information and/or permitting access to threat actors
  • Guidance and resources on handling a cybersecurity compromise

In addition to the CISA guidelines, NYDFS has a publicly available Cybersecurity Resource Center with more information and guidance for DFS-regulated individuals and entities.

For More Information

If you have any questions about this blog post, please contact Michelle Hon Donovan, Ariel Seidner, Milagros Astesiano, any of the attorneys in the Privacy and Data Protection Group, or the attorney in the firm with whom you are regularly in contact.

Disclaimer: This blog post has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm’s full disclaimer.

FTC Announces Enforcement Actions Against AI Use

As part of its ongoing enforcement efforts against allegedly deceptive and misleading uses of artificial intelligence, the Federal Trade Commission (FTC) disclosed five new enforcement actions on September 25, 2024, against companies across various industries that either allegedly made fraudulent claims about their AI resources or offered AI services that could be used in misleading or deceptive ways. Read the full Alert on the Duane Morris website.

© 2009-2025 Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
