As part of its ongoing enforcement efforts against allegedly deceptive and misleading uses of artificial intelligence, the Federal Trade Commission (FTC) announced five new enforcement actions on September 25, 2024, against companies across various industries that either allegedly made fraudulent claims about their AI offerings or provided AI services that could be used in misleading or deceptive ways. Read the full Alert on the Duane Morris website.
How Copyright Law Regards Artificial Intelligence
Duane Morris partner Agatha Liu is quoted in the Bloomberg Law article, “AI Art Appeal’s Procedural Flaws Put Broader Ruling in Doubt.”
An appeals court panel’s focus on procedural issues in a case involving efforts to copyright AI-generated work left attorneys concerned the judges may sidestep larger questions about how copyright law regards the emerging technology. […]
“The point of copyright protection is it should reward creativity. It should be associated with a human being, not a machine,” said Liu. “But there’s merit in claiming the creator of the machine being an author.”
Read the full article on the Bloomberg Law website.
Artificial Intelligence Updates – 09.18.24
#HelloWorld. It’s been a long summer hiatus. Please bear with us as we play our way back into shape. In this issue, we recap the summer highlights of AI legal and regulatory developments. Of course, the EU AI Act, but not just the EU AI Act. California continues to enact headline-grabbing AI legislation, content owners continue to ink deals with AI model developers, and what would a month in the federal courts be without another AI lawsuit or two. Let’s stay smart together.
Read more on The Artificial Intelligence Blog.
Next up for Medtech: Being Generative in Domain-Specific Languages
Given the vast amounts of data available, including raw measurements, diagnostic information, treatment plans, and regulatory guidelines, the biomedical technologies sector stands to gain immensely from artificial intelligence (AI), particularly machine learning (ML).
ML, at its core, learns from training datasets to identify patterns, which can then be applied to new input data to make direct inferences. For instance, if specific body scans frequently result in a particular diagnosis, ML can be used to quickly provide that diagnosis when similar scans are encountered, thus aiding in disease diagnosis.
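The pattern-matching described above can be sketched as a toy nearest-neighbor classifier. The scans, feature values, and diagnosis labels below are invented purely for illustration; real diagnostic ML systems are far more sophisticated:

```python
import math

# Hypothetical training set: each "scan" is a small feature vector,
# paired with the diagnosis it was associated with.
training_scans = [
    ([0.90, 0.10, 0.80], "condition_a"),
    ([0.20, 0.70, 0.10], "healthy"),
    ([0.85, 0.15, 0.75], "condition_a"),
    ([0.10, 0.80, 0.20], "healthy"),
]

def diagnose(new_scan):
    """Return the diagnosis of the most similar training scan
    (1-nearest-neighbor by Euclidean distance)."""
    features, label = min(
        training_scans,
        key=lambda pair: math.dist(pair[0], new_scan),
    )
    return label

# A new scan resembling the "condition_a" examples gets that diagnosis.
print(diagnose([0.88, 0.12, 0.79]))  # condition_a
```

This mirrors the article's point: once the model has seen scans that frequently led to a particular diagnosis, a sufficiently similar new scan can be mapped to that diagnosis automatically.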
Read the full article by Duane Morris partner Agatha H. Liu, PhD on the MD+DI website.
Embracing Artificial Intelligence in the Energy Industry
Last year, President Joe Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Since the issuance of the executive order, a lot of attention has been focused on the provision requiring “the head of each agency with relevant regulatory authority over critical infrastructure … to assess potential risks related to the use of AI in critical infrastructure sectors involved, … and to consider ways to mitigate these vulnerabilities.” Naturally, government agencies generated numerous reports cataloging the well-documented risks of AI. At the same time, nearly every company has implemented risk-mitigation guidelines governing the use of artificial intelligence. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringements, to broader societal risks posed by automated decision-making tools. Perhaps because of these risks, less attention has been focused on the offensive applications of AI, and relatedly, fewer companies have implemented guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risks, as a recent report by the Department of Energy highlights.
Read The Legal Intelligencer article by Duane Morris partners Phil Cha and Brian H. Pandya.
Suit Involving Artificial Intelligence-Powered Hiring Tools Heads to Discovery
A closely watched discrimination lawsuit over software provider Workday’s artificial intelligence-powered hiring tools is headed into discovery after a California federal court ruled the company may be subject to federal antidiscrimination laws if its products make decisions on candidates. […]
Alex W. Karasik, a management-side attorney who is a partner at Duane Morris LLP and a member of the firm’s workplace class action group, said companies using or selling workplace-related AI tools need to track the Workday proceedings closely.
“This is definitely a case to watch, as it’s a landmark case involving the use of artificial intelligence and the hiring process,” he said. “Both employers and technology vendors, particularly those involved with artificial intelligence or algorithmic decision-making tools, absolutely need to pay attention to this case.”
He said [the] decision sets out critical guidelines for courts’ evaluations of who may be on the hook when a vendor of AI-based hiring tools faces allegations that its product churns out biased results. […]
Read the full article on the Law360 website (subscription may be required).
The Age of Artificial Intelligence and Commercial Transactions
The pervasiveness of artificial intelligence (AI) is transforming the commercial transactions landscape. Providers across industries are looking to utilize third-party AI tools, or utilize customer data to train AI models, in connection with providing services or implementing use cases proposed by their customers to create efficiencies and cost savings. The intellectual property (IP) stakes are heightened, and parties on either side of a transaction will need to carefully leverage agreements to maintain IP rights in their own data, secure IP rights in resulting products, and protect themselves against claims of infringement.
Read the full Landslide article by Duane Morris’ Ariel Seidner. (ABA membership required.)
The Use of Artificial Intelligence Tools Before Pennsylvania Courts
By now, litigators appreciate that a degree of technological expertise is needed to practice law effectively. Everyone has heard about the unfortunate attorney in Texas who appeared at a Zoom hearing as a worried kitten. But in the past year, attorneys have become more attuned to the potential and risks of artificial intelligence (AI). Last June, lawyers in New York made headlines after relying on a chatbot’s research skills, leading to sanctions for unknowingly submitting fictitious caselaw. One journalist even found himself in a love triangle with a chatbot bent on ending his marriage. In spite of these cautionary tales, the use of AI in the legal profession is on the rise as trusted legal research services like LexisNexis and Westlaw roll out AI-assisted research functions and major tech companies integrate AI into their products.
Read The Legal Intelligencer article by Rachel Good on the Duane Morris website.
How AI Tools Can Affect E-Discovery
Artificial intelligence use cases are expanding at a rapid rate, and the pressure is mounting for businesses to leverage that technology or risk being left behind by their competitors. In addition to open-source applications, businesses are using enterprise-specific tools that enable employees to use generative AI technology at work. This includes licensed versions of the open-source models or business-specific tools developed alongside the applications the business is already using.
Read the article by Sarah O’Laughlin Kulik on the Duane Morris website.
Artificial Intelligence in the Courtroom
Earlier this spring, a Washington state court judge issued what is widely believed to be the first evidentiary decision regarding artificial intelligence. In Washington v. Puloka, following a Frye hearing, the judge excluded AI-enhanced video from being considered as evidence. Read the full post on the Duane Morris Artificial Intelligence Blog.