#HelloWorld. In the midst of summer, the pace of significant AI legal and regulatory news has mercifully slackened. With room to breathe, this issue points the lens in a different direction, at some of our persistent AI-related obsessions and recurrent themes. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Stanford is on top of the foundation model evaluation game. Dedicated readers may have picked up on our love of the Stanford Center for Research on Foundation Models. The Center’s 2021 paper, “On the Opportunities and Risks of Foundation Models,” is long, but it coined the term “foundation models” to cover the new transformer LLM and diffusion image generator architectures dominating the headlines. The paper exhaustively examines these models’ capabilities; underlying technologies; applications in medicine, law, and education; and potential social impacts. In a downpour of hype and speculation, the Center’s empirical, fact-forward thinking provides welcome shelter.
Now, like techno-Britney Spears, the Center has done it again. (The AI Update’s human writers can, like LLMs, generate dad jokes.) With the European Parliament’s mid-June adoption of the EU AI Act (setting the stage for further negotiation), researchers at the Center asked this question: To what extent would current LLM and image-generation models comply with the EU AI Act’s proposed regulatory rules for foundation models, mainly set out in Article 28? The answer: no model fully complies today. But open-source start-up Hugging Face’s BLOOM model ranked highest under the Center’s scoring system, earning 36 of a possible 48 points. The scores of Google’s PaLM 2, OpenAI’s GPT-4, Stability AI’s Stable Diffusion, and Meta’s LLaMA models, in contrast, all hovered in the 20s.
Continue reading “The AI Update | June 29, 2023”
#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text an author’s manuscript can include; prohibit publishers from using AI narrators for audiobooks, absent the author’s consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.
Continue reading “The AI Update | June 14, 2023”
#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”
We watched all five-plus hours of testimony so you don’t have to. Here are the core takeaways from the sessions:
Continue reading “The AI Update | May 31, 2023”
#HelloWorld. In this issue, we survey the tech industry’s private self-regulation—what model developers and online platforms have implemented as restrictions on AI usage. Also, one court in the U.S. hints at how much copying by an AI model is too much and the EU releases its most recent amendments to the AI Act. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Industry self-regulation: Past AI Updates have summarized legislative and regulatory initiatives around the newest AI architectures—LLMs and other “foundation models” like GPT and Midjourney. In the meantime, LLM developers and users have not stood still. In our last issue, we discussed OpenAI’s new user opt-out procedures. While comprehensive private standards feel a long way off, here’s what a few other industry players are doing:
- Anthropic: Last week, Anthropic, a foundation model developer, announced its “Constitutional AI” system. The core idea is to use one AI model to evaluate and critique the output of another according to a set of values explicitly provided to the system. One such broad value Anthropic champions—harmlessness to the user: “Please choose the assistant response that is as harmless and ethical as possible. Do NOT choose responses that are toxic, racist, or sexist.” The devil, of course, is in the implementation details.
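For readers curious about the mechanics, the process Anthropic describes amounts to a critique-and-revise loop: a draft response is checked against each constitutional principle by a second model pass, then rewritten. The sketch below is purely illustrative control flow under that description; the `generate`, `critique`, and `revise` functions are hypothetical stand-ins for model calls, not Anthropic’s actual API.

```python
# Illustrative sketch of a "Constitutional AI"-style critique-and-revise loop.
# All model calls are placeholder functions showing control flow only.

CONSTITUTION = [
    "Please choose the assistant response that is as harmless and ethical as possible.",
]

def generate(prompt: str) -> str:
    # Stand-in for the assistant model producing a draft answer.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in for a second model pass that evaluates the draft
    # against one constitutional principle and suggests a fix.
    return f"Considering '{principle}': revise the draft accordingly."

def revise(response: str, critique_text: str) -> str:
    # Stand-in for the revision pass that rewrites the draft
    # in light of the critique.
    return response + " [revised per critique]"

def constitutional_loop(prompt: str, constitution=CONSTITUTION) -> str:
    # Draft once, then critique and revise against each principle in turn.
    response = generate(prompt)
    for principle in constitution:
        response = revise(response, critique(response, principle))
    return response
```

The point of the design is that the values live in an explicit, human-readable list rather than being implicit in human feedback labels, which is what makes the “constitution” framing apt.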
- Salesforce: In a similar vein, enterprise software provider Salesforce recently released “Guidelines for Responsible Development” of “Generative AI.” The most granular guidance relates to promoting accuracy of the AI model’s responses: The guidelines recommend citing sources and explicitly labeling answers the user should double check, like “statistics” and “dates.”
Continue reading “The AI Update | May 16, 2023”
#HelloWorld. We originally thought this edition would focus on OpenAI’s attempts to self-regulate GPT usage, but the European Union had other plans for us. This past Thursday, news broke of an agreement to add generative AI tools to the AI Act, the EU’s centerpiece AI legislation. So today’s issue starts there, before discussing OpenAI’s and others’ recent announcements regarding training data access and usage. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
The EU’s Artificial Intelligence Act: The EU has been debating a proposed AI Act since 2018. In 2021, it published a legislative framework that would classify AI products into one of four categories: unacceptable risk (and therefore forbidden); high risk (and therefore subject to regular risk assessments, independent testing, transparency disclosures, and strict data governance requirements); limited risk; and minimal risk. But this approach was developed before so-called “foundation models”—LLMs like ChatGPT and image generators like DALL-E and Midjourney—exploded into the public consciousness. So questions remained about whether the AI Act would be adjusted to accommodate this new reality.
Continue reading “The AI Update | May 2, 2023”
from the Duane Morris Technology, Media & Telecom Group
#HelloWorld. In this edition, momentum picks up in Congress, the executive branch, and the states to regulate AI, while more intellectual-property litigation may be on the horizon. Overseas, governments continue to be wary of the new large AI models. It’s getting complicated. Let’s stay smart together.
Proposed legislation in the U.S.: Senate Majority Leader Chuck Schumer (D-N.Y.) revealed that his office has met with AI experts to develop a framework for AI legislation for release in the coming weeks. The proposal’s centerpiece would require independent experts to test AI technologies before their public launch and would permit users to access those independent assessments.
This is not the only AI-related legislative effort to have emerged from Congress. Last year, Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.), and Representative Yvette Clarke (D-N.Y.) proposed the Algorithmic Accountability Act of 2022, focused on “automated decision systems” that use AI algorithms to make “critical decisions” in areas such as education, employment, healthcare, and public benefits. The proposal would require these AI systems to undergo regular “impact assessments,” under the general supervision of the Federal Trade Commission. The bill has not yet emerged from committee.
Continue reading “The AI Update | April 18, 2023”