The AI Update | May 16, 2023

#HelloWorld. In this issue, we survey the tech industry’s private self-regulation—what model developers and online platforms have implemented as restrictions on AI usage. Also, one court in the U.S. hints at how much copying by an AI model is too much and the EU releases its most recent amendments to the AI Act. Let’s stay smart together. (Subscribe to the mailing list to receive future issues).

Industry self-regulation: Past AI Updates have summarized legislative and regulatory initiatives around the newest AI architectures—LLMs and other “foundation models” like GPT and Midjourney. In the meantime, LLM developers and users have not stood still. In our last issue, we discussed OpenAI’s new user opt-out procedures. While comprehensive private standards feel a long way off, here’s what a few other industry players are doing:

    • Anthropic: Last week, Anthropic, a foundation model developer, announced its “Constitutional AI” system. The core idea is to use one AI model to evaluate and critique the output of another according to a set of values explicitly provided to the system (see the sketch after this list). One broad value Anthropic champions is harmlessness to the user: “Please choose the assistant response that is as harmless and ethical as possible. Do NOT choose responses that are toxic, racist, or sexist.” The devil, of course, is in the implementation details.
    • Salesforce: In a similar vein, enterprise software provider Salesforce recently released “Guidelines for Responsible Development” of “Generative AI.” The most granular guidance concerns promoting the accuracy of the AI model’s responses: the guidelines recommend citing sources and explicitly labeling answers the user should double-check, such as “statistics” and “dates.”
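
To make the Anthropic approach concrete, here is a minimal sketch, in Python, of the kind of critique-and-revise loop that “Constitutional AI” describes. This is illustrative only, not Anthropic’s implementation: the generate function is a hypothetical placeholder for any call to a text-generation model, and the single principle shown is the harmlessness instruction quoted above.

    # Illustrative sketch of a "constitutional" critique-and-revise loop.
    # Not Anthropic's implementation: generate() is a hypothetical placeholder
    # for any text-generation model call (an API client, a local model, etc.).

    CONSTITUTION = [
        "Please choose the assistant response that is as harmless and ethical "
        "as possible. Do NOT choose responses that are toxic, racist, or sexist.",
    ]

    def generate(prompt: str) -> str:
        """Hypothetical placeholder; wire this to a real model client."""
        raise NotImplementedError

    def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
        # Draft an answer, then have a model critique the draft against each
        # explicit principle and rewrite it to address the critique.
        draft = generate(user_prompt)
        for principle in CONSTITUTION:
            for _ in range(rounds):
                critique = generate(
                    f"Principle: {principle}\n\nResponse: {draft}\n\n"
                    "Identify any way the response violates the principle."
                )
                draft = generate(
                    f"Response: {draft}\n\nCritique: {critique}\n\n"
                    "Rewrite the response to fully address the critique."
                )
        return draft

The design’s appeal, for regulatory purposes, is that the governing values live in plain-language text an operator can inspect and amend, rather than being implicit in the training data.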

Continue reading “The AI Update | May 16, 2023”

The AI Update | May 2, 2023

#HelloWorld. We originally thought this edition would focus on OpenAI’s attempts to self-regulate GPT usage, but the European Union had other plans for us. This past Thursday, news broke of an agreement to add generative AI tools to the AI Act, the EU’s centerpiece AI legislation. So today’s issue starts there, before discussing OpenAI’s and others’ recent announcements regarding training data access and usage. Let’s stay smart together. (Subscribe to the mailing list to receive future issues).

The EU’s Artificial Intelligence Act: The EU has been debating a proposed AI Act since 2018. In 2021, it published a legislative framework that would classify AI products into one of four categories: unacceptable risk (and therefore forbidden); high risk (and therefore subject to regular risk assessments, independent testing, transparency disclosures, and strict data governance requirements); limited risk; and minimal risk. But this approach was developed before so-called “foundation models”—LLMs like ChatGPT and image generators like DALL-E and Midjourney—exploded into the public consciousness. So questions remained about whether the AI Act would be adjusted to accommodate this new reality.

Continue reading “The AI Update | May 2, 2023”

The AI Update | April 18, 2023

from the Duane Morris Technology, Media & Telecom Group

#HelloWorld. In this edition, momentum picks up in Congress, the executive branch, and the states to regulate AI, while more intellectual-property litigation may be on the horizon. Overseas, governments continue to be wary of the new large AI models. It’s getting complicated. Let’s stay smart together. 

Proposed legislation in the U.S.: Senate Majority Leader Chuck Schumer (D-N.Y.) revealed that his office has met with AI experts to develop a framework for AI legislation for release in the coming weeks. The proposal’s centerpiece would require independent experts to test AI technologies before their public launch and would permit users to access those independent assessments.

This is not the only AI-related legislative effort to have emerged from Congress. Last year, Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.), and Representative Yvette Clarke (D-N.Y.), proposed the Algorithmic Accountability Act of 2022, focused on “automated decision systems” using AI algorithms to make “critical decisions” relating to, for example, education, employment, healthcare, and public benefits. The proposal would require these AI systems to undergo regular “impact assessments,” under the general supervision of the Federal Trade Commission. This bill has not yet emerged from committee.

Continue reading “The AI Update | April 18, 2023”

Can A Machine Be Creative?

On March 16, 2023, the United States Copyright Office (USCO) published Copyright Registration Guidance (Guidance) on generative AI[1]. In the Guidance, the USCO reminded us that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” This statement curiously conjures the notion of a machine creating copyrightable works autonomously.

While the operation of a machine, or specifically the execution of the underlying AI technology, may be largely mechanical with little human involvement, designing that technology can take significant human effort. If we view the question broadly, as one of protecting, as intellectual property, the human work that powers machines, a parallel emerges: just as authorship has been an issue when an AI technology is used to create copyrightable subject matter, inventorship has been an issue when an AI technology is used to generate an idea that may be eligible for patent protection. Unlike the evaluation of authorship, though, the assessment of inventorship puts the human contribution to the AI technology front and center[2]. Without getting into the reasons for this difference in treatment, let’s consider whether an AI technology used in creating copyrightable subject matter, and specifically the human contribution to that technology, does or does not provide any “creative input.”

Continue reading “Can A Machine Be Creative?”

The AI Update | April 4, 2023

from the Duane Morris Technology, Media & Telecom Group

#HelloWorld. Welcome to the first edition of The AI Update. Every other week, we’ll provide you with a curated summary of the most relevant, impactful legal developments in the world of AI. Let’s stay smart together.

Our mission: Since ChatGPT’s public launch last November, the onslaught of AI-related news has been daily and relentless. We’ve guided our clients one-on-one about legal developments, the knowns and unknowns, and what we see coming down the road. So much so that a centralized information exchange—this newsletter—feels like a logical next step. Why every two weeks and why only one page? So as not to add to the flood. What if you want more detail? Contact us individually and we’ll get you up to speed. There’s a lot of noise out there; we try to focus on the signal.

Regulatory activity in the U.S.: For now, two agencies have emerged as the most vocal in the AI space (at least in public). The Copyright Office released guidance on how to seek copyright protection for works created with generative-AI assistance. In short: a human author is required and any AI-tool use must be disclosed and disclaimed. The Office is also holding a series of public listening sessions on this and other related AI topics, like the use of copyrighted works to train AI models. The sessions start on April 19—stay tuned for highlights in future editions of The AI Update.

Continue reading “The AI Update | April 4, 2023”


The opinions expressed on this blog are those of the author and are not to be construed as legal advice.