#HelloWorld. Spring has sprung. While the EU AI Act receives wall-to-wall coverage in other outlets, this issue highlights recent rules, warnings, and legislative enactments here in the U.S. And it ends on a personal, meditative note from an AI user, worth a read. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
California’s Generative AI Report Addresses Benefits and Risks of AI
By Milagros Astesiano and Ariel Seidner
Following Governor Newsom’s September 2023 Executive Order on Artificial Intelligence, California’s state administration released a report analyzing the potential benefits and risks surrounding the use of Generative Artificial Intelligence (“GenAI”) within the state government (the “Report”). This is the first of many steps called for under the Executive Order. Continue reading “California’s Generative AI Report Addresses Benefits and Risks of AI”
Executive Order on Use of AI
On October 30, 2023, President Biden signed an Executive Order (the “EO”) providing guidance for employers on the emerging use of artificial intelligence in the workplace. The EO establishes industry standards for AI security, innovation, and safety across significant employment sectors. Spanning over 100 pages, the EO endeavors to set parameters for responsible AI use, seeking to harness AI for good while mitigating its risks.
Read more on the Duane Morris Class Action Defense Blog.
The AI Update | June 14, 2023
#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audiobooks, absent the author’s consent; and bar publishers from using AI to generate translations, book covers, or interior art, again absent consent.
The AI Update | May 31, 2023
#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”
We watched all five-plus hours of testimony so you don’t have to. Here are the core takeaways from the sessions: Continue reading “The AI Update | May 31, 2023”