#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audiobooks, absent the author’s consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.