California has passed a new AI law, Assembly Bill No. 3030, which establishes disclaimer requirements for healthcare providers that send patients unvetted messages generated by artificial intelligence. AB 3030 is effective January 1, 2025. Under the new law, when a covered provider uses AI to generate a patient communication concerning a patient’s clinical information, that communication must include a disclaimer indicating that it was generated by AI. Read the full Alert on the Duane Morris website.
Webinar: Understanding Data Licensing in the World of AI
Duane Morris’ Technology Transactions, Licensing and Commercial Contracts Group presents a webinar, Understanding Data Licensing in the World of AI, to be held on Wednesday, December 18, 2024, from 1:00 p.m. to 2:00 p.m. Eastern.
As we head into 2025, more and more of our clients are negotiating data licensing agreements and asking for assistance in understanding a company’s rights regarding data. This webinar will review intellectual property rights with regard to data, the frequent use and terms of Creative Commons licenses with datasets, and important and commonly negotiated terms in data licensing agreements, with our attorneys providing thoughts on these issues and how they relate to AI. Learn more.
FTC Cracks Down on Allegedly Deceptive Artificial Intelligence Schemes
As part of its ongoing enforcement efforts against allegedly deceptive and misleading uses of artificial intelligence, the Federal Trade Commission (FTC) disclosed five new enforcement actions on September 25, 2024, against companies across various industries that either allegedly made fraudulent claims about their AI resources or offered AI services that could be used in misleading or deceptive ways. Read the full Alert on the Duane Morris website.
Further Focus on AI Washing: FTC Announces Operation AI Comply
The Federal Trade Commission filed lawsuits against five companies, alleging that each either made deceptive claims about AI products and services or used AI in deceptive ways. The FTC announced that these lawsuits are part of “Operation AI Comply,” a crackdown on companies allegedly engaging in such conduct. AI washing has been a recent focus of federal enforcers, and this week’s lawsuits represent another step by the FTC in furthering its position that there is no AI exception to the law.
AI Art Appeal’s Procedural Flaws Put Broader Ruling in Doubt
Duane Morris partner Agatha Liu is quoted in the Bloomberg Law article, “AI Art Appeal’s Procedural Flaws Put Broader Ruling in Doubt.”
An appeals court panel’s focus on procedural issues in a case involving efforts to copyright AI-generated work left attorneys concerned the judges may sidestep larger questions about how copyright law regards the emerging technology. […]
“The point of copyright protection is it should reward creativity. It should be associated with a human being, not a machine,” said Liu. “But there’s merit in claiming the creator of the machine being an author.”
Read the full article on the Bloomberg Law website.
The AI Update | September 18, 2024
#HelloWorld. It’s been a long summer hiatus. Please bear with us as we play our way back into shape. In this issue, we recap the summer highlights of AI legal and regulatory developments. Of course, the EU AI Act, but not just the EU AI Act. California continues to enact headline-grabbing AI legislation, content owners continue to ink deals with AI model developers, and what would a month in the federal courts be without another AI lawsuit or two. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Continue reading “The AI Update | September 18, 2024”
Illinois Employment Legislation Regulates Employer Use of AI
In the span of 10 days in August 2024, Illinois Governor J.B. Pritzker signed into law a series of significant employment bills, paving the way for a new employment landscape beginning in 2025 and 2026. The new legislation adds requirements for employers that use artificial intelligence in their decision-making processes and imposes liability under the Illinois Human Rights Act if those AI systems create a discriminatory effect.
Read the full Alert on the Duane Morris website.
Adopting Generative AI in Medtech
Given the vast amounts of data available, including raw measurements, diagnostic information, treatment plans, and regulatory guidelines, the biomedical technologies sector stands to gain immensely from artificial intelligence (AI), particularly machine learning (ML).
ML, at its core, learns from training datasets to identify patterns, which can then be applied to new input data to make direct inferences. For instance, if specific body scans frequently result in a particular diagnosis, ML can be used to quickly provide that diagnosis when similar scans are encountered, thus aiding in disease diagnosis.
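For readers curious how that pattern looks in practice, the sketch below is a minimal, hypothetical Python example (not drawn from the article) using the scikit-learn library: a classifier is fit on made-up "scan" features paired with prior diagnoses, then asked to infer a diagnosis for a new, similar input.

```python
# Hypothetical illustration only: a model learns from labeled historical data,
# then is applied to a new input to infer a label, mirroring the
# scan-to-diagnosis example above. All data values here are made up.
from sklearn.ensemble import RandomForestClassifier

# Each row is a simplified feature vector for a past scan; each label is the
# diagnosis recorded for that scan (1 = condition present, 0 = absent).
X_train = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.8], [0.7, 0.2]]
y_train = [1, 0, 1, 0]

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)      # learn patterns from prior cases

new_scan = [[0.25, 0.85]]        # a new scan resembling past positive cases
print(model.predict(new_scan))   # inferred diagnosis for the new scan
```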
Read the full article by Duane Morris partner Agatha H. Liu, PhD on the MD+DI website.
Illinois Enacts AI Law Focused on Employment Practices
On August 9, 2024, Illinois enacted its landmark artificial intelligence employment law, HB 3773. This legislation, which amends the Illinois Human Rights Act, endeavors to prevent discriminatory consequences of using AI in employment decision-making processes. This law goes into effect on January 1, 2026. Illinois is one of 34 states that have either enacted or proposed laws regulating the use of artificial intelligence. Read the full Alert on the Duane Morris website.
AI Adoption in the Energy Space
Last year, President Joe Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Since the issuance of the executive order, a lot of attention has been focused on the provision requiring “the head of each agency with relevant regulatory authority over critical infrastructure … to assess potential risks related to the use of AI in critical infrastructure sectors involved, … and to consider ways to mitigate these vulnerabilities.” Naturally, government agencies generated numerous reports cataloging the well-documented risks of AI. At the same time, nearly every company has implemented risk-mitigation guidelines governing the use of artificial intelligence. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringements, to broader societal risks posed by automated decision-making tools. Perhaps because of these risks, less attention has been focused on the offensive applications of AI, and relatedly, fewer companies have implemented guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risks, as a recent report by the Department of Energy highlights.
Read The Legal Intelligencer article by Duane Morris partners Phil Cha and Brian H. Pandya.