The AI Update | April 18, 2023

from the Duane Morris Technology, Media & Telecom Group

#HelloWorld. In this edition, momentum to regulate AI builds in Congress, the executive branch, and the states, while more intellectual-property litigation may be on the horizon. Overseas, governments remain wary of the new large AI models. It’s getting complicated. Let’s stay smart together.

Proposed legislation in the U.S.: Senate Majority Leader Chuck Schumer (D-N.Y.) revealed that his office has been meeting with AI experts to develop a framework for AI legislation, expected for release in the coming weeks. The proposal’s centerpiece would require independent experts to test AI technologies before their public launch and would permit users to access those independent assessments.

This is not the only AI-related legislative effort to have emerged from Congress. Last year, Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.), and Representative Yvette Clarke (D-N.Y.), proposed the Algorithmic Accountability Act of 2022, focused on “automated decision systems” that use AI algorithms to make “critical decisions” relating to, for example, education, employment, healthcare, and public benefits. The proposal would require these AI systems to undergo regular “impact assessments,” under the general supervision of the Federal Trade Commission. The bill has not yet emerged from committee.

Proposed state legislation: In parallel with Congress, at least four state legislatures spent the opening months of 2023 on AI legislation of their own. One approach, exemplified by Texas and Pennsylvania, would require registration of AI systems with state governments and would regulate state agencies’ own use of AI tools. Another camp, typified by California, would regulate private deployment of AI products and echoes the approach of the proposed federal Algorithmic Accountability Act, mandating “impact assessments” to test for bias and discrimination in “critical decisions.” None of these state bills has yet reached a vote.

Agency action in the U.S.: In the last edition of The AI Update, we focused on the FTC and its public guidance on the marketing of AI products. This past week, the Commerce Department entered the fray, announcing that it would soon solicit public comments on trust and safety testing for AI, on what data access might be necessary to conduct those assessments, and on whether required testing should vary by industry. Public comments will be due 60 days after formal publication of the Department’s request.

Efforts outside the U.S.: The story from Europe and Asia remains an uncertain one. Italy made headlines by temporarily banning ChatGPT, but is now reported to have given OpenAI until April 30 to remediate the privacy issues identified. Other European privacy regulators—in Germany, France, Spain, and Ireland—are also reportedly making inquiries, but no definitive action has yet been taken. China, for its part, issued a set of draft rules that would require content generated by AI models to “reflect the core values of socialism” and “not subvert state power,” according to an April 11 report from CNBC.

And don’t forget about civil litigation: Some copyright and related lawsuits against generative AI companies are already on file. OpenAI and its partner Microsoft are facing a class action in the Northern District of California over their GPT-based code generation tool. Image-generator Stability AI is being sued by Getty Images in the District of Delaware and by a group of artists in the Northern District of California.

Those cases remain in the early stages, but the prospect of additional IP litigation against large AI model creators grows with each passing week. Universal Music Group, with a history of enforcing its copyrights against unauthorized music services, has reportedly put Apple and Spotify on notice that they should block AI companies from scraping music and lyrics files from their services. In the same vein, media mogul Barry Diller called on publishers and the creative industry to pursue litigation to prevent AI developers like OpenAI from scraping and using “free media” to train their models. And Drake seems ready to sue . . . someone over an AI-generated cover apparently performed in his voice.

What we’re reading and following: Let’s end this issue on a lighter note, with a recently published study from a Stanford and Google research team that placed 25 “generative agents” (software programs simulating individual human behavior) in a digital town inspired by the popular Sims video game. Over two simulated days, one of the agents planned a Valentine’s Day party and another began a campaign for mayor. Human evaluators rated the agents’ interactions as more believably human than the same behaviors role-played by actual people. None of the agents appears to have started plans for world domination.

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola

Subscribe to the mailing list to receive future issues.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
