FTC launches GenAI investigation

The Federal Trade Commission announced today that it has begun an investigation into generative AI investments and partnerships. The FTC is exercising its investigative power under Section 6(b) of the FTC Act, which allows the agency to issue compulsory process (similar to a subpoena or Civil Investigative Demand) to gather information about an organization without a specific law-enforcement purpose. Historically, the FTC has used its 6(b) power to conduct studies of particular industries or practices that may inform future agency positions or enforcement priorities. The investigation announced today is a concrete fact-gathering step by the FTC toward the regulation of generative AI.

What does herring fishing have to do with AI?

Herring fishing – of all things – could have a big impact on AI regulation in 2024. Two cases brought by herring fishing companies, now before the Supreme Court, could have wide-reaching influence. The cases challenge actions taken by the National Marine Fisheries Service and longstanding Chevron deference. Under Chevron, courts defer to reasonable agency interpretations of ambiguous laws. At oral argument last week, the Court signaled a willingness to overturn Chevron deference. This is notable for the artificial intelligence space, which lacks explicit legislation from Congress. Indeed, last year's Executive Order on Artificial Intelligence is largely directed at federal agencies, instructing them to take action. Without Chevron deference, actions taken by agencies pursuant to that order could be more susceptible to legal challenge. Justice Kagan even called out AI at oral argument as an area that could see effects from the Court's ruling. The Supreme Court is expected to rule by the end of June.

The AI Update | October 5, 2023

#HelloWorld. Fall begins, and the Writers Guild strike ends. In this issue, we look at what that means for AI in Hollywood. We also run through a dizzying series of self-regulating steps AI tech players have undertaken. As the smell of pumpkin spice fills the air, let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

AI and the Writers Guild. Screenwriters ended a nearly five-month strike with a tentative new agreement good through mid-2026. The Minimum Basic Agreement (MBA) includes multiple AI-centric provisions. The AI-related highlights:

The AI Update | August 10, 2023

#HelloWorld. In this issue, the state of state AI laws (disclaimer: not our original phrase, although we wish it were). Deals for training data are in the works. And striking actors have made public their AI-related proposals—careful about those “Digital Replicas.” It’s August, but we’re not stopping. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

States continue to pass and propose AI bills. Sometimes you benefit from the keen, comprehensive efforts of others. In the second issue of The AI Update, we summarized state efforts to legislate in the AI space. Now, a dedicated team at EPIC, the Electronic Privacy Information Center, has spent all summer assembling an update, "The State of State AI Laws: 2023," a master(ful) list of all state laws enacted and bills proposed touching on AI. We highly recommend reading their easy-to-navigate online site.


The AI Update | July 27, 2023

#HelloWorld. Copyright suits are as unrelenting as the summer heat, with no relief in the forecast. AI creators are working on voluntary commitments to watermark synthetic content. And meanwhile, is ChatGPT getting "stupider"? Lots to explore. Let's stay smart together. (Subscribe to the mailing list to receive future issues.)

Big names portend big lawsuits. Since ChatGPT’s public launch in November 2022, plaintiffs have filed eight major cases in federal court—mostly in tech-centric Northern California—accusing large language models and image generators of copyright infringement, Digital Millennium Copyright Act violations, unfair competition, statutory and common law privacy violations, and other assorted civil torts. (Fancy a summary spreadsheet? Drop us a line.)

Here comes another steak for the grill: This month, on CBS' "Face the Nation," IAC chairman Barry Diller previewed that "leading publishers" were constructing copyright cases against generative AI tech companies, viewing such litigation as a linchpin for arriving at a viable business model: "yes, we have to do it. It's not antagonistic. It's to stake a firm place in the ground to say that you cannot ingest our material without figuring out a business model for the future." Semafor later reported that The New York Times, News Corp., and Axel Springer were all among this group of likely publishing company plaintiffs, worried about the loss of website traffic that would come from generative AI answers replacing search engine results and looking for "billions, not millions, from AI."


The AI Update | July 13, 2023

#HelloWorld. Pushback and disruption are the themes of this edition as we look at objections to proposed regulation in Europe, an FTC investigation, the growing movement in support of uncensored chatbots, and how AI is disrupting online advertising. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Pushback against AI regulation. The AI Update has followed closely the progress of the European Union's proposed AI Act. Today we report on pushback in the form of an open letter from representatives of companies that operate in Europe expressing "serious concerns" that the AI Act would "jeopardise Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing." The letter takes aim in particular at the proposed "high risk" treatment of generative AI models, worrying that "disproportionate compliance costs and disproportionate liability risks" will push companies out of Europe and harm the ability of the EU to be at the forefront of AI development. The ask from the signatories is that European legislation "confine itself to stating broad principles in a risk-based approach." As we have explained, there is a long road and many negotiations ahead before any version of the AI Act becomes law in Europe. So it remains to be seen whether any further revisions reflect these concerns.

The AI Update | June 14, 2023

#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors' work for "training artificial intelligence to generate text." Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author's manuscript can include; prohibit publishers from using AI narrators for audiobooks, absent the author's consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.


The AI Update | May 31, 2023

#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”

We watched all five-plus hours of testimony so you don't have to. Here are the core takeaways from the sessions:

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
