The AI Update | June 14, 2023

#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audiobooks, absent the author’s consent; and bar publishers from employing AI to generate translations, book covers, or interior art, again absent consent.

Continue reading “The AI Update | June 14, 2023”

Webinar: Liability Considerations in Enterprise Use of Generative AI

Duane Morris partner Alex Goranin will be moderating the webinar “Liability Considerations in Enterprise Use of Generative AI” on June 27, hosted by The Copyright Society.

For more information and to register, visit The Copyright Society website.

About the Program

Since ChatGPT burst onto the scene last fall, developers of large language and other foundation models have raced to release new versions; the number of app developers building on top of the models has mushroomed; and companies large and small have considered—and reconsidered—approaches to integrating generative AI tools within their businesses. With these decisions has come a cascade of practical business risks, and copyright and copyright-adjacent issues have taken center stage. After all, if your marketing team’s Midjourney-like AI image generator outputs artwork later accused of infringement, who is ultimately responsible? And how can you mitigate that risk—through contractual indemnity? through guardrails deployed in your training process? through post-hoc content moderation?
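
To make the last of those options concrete, here is one minimal, hypothetical way a post-hoc moderation step might screen generated images against a registry of known works using perceptual hashing. The imagehash library choice, the file names, and the distance threshold are all illustrative assumptions, not a statement of how any vendor actually does it:

```python
# Hypothetical post-hoc moderation step: flag generated images that look
# too similar to registered works, via perceptual hashing.
from PIL import Image  # Pillow
import imagehash

# Illustrative registry of perceptual hashes of works to screen against.
KNOWN_WORKS = ["registered_work_1.png", "registered_work_2.png"]
known_hashes = [imagehash.phash(Image.open(path)) for path in KNOWN_WORKS]

def flag_if_similar(generated_path: str, max_distance: int = 8) -> bool:
    """Return True if the generated image is suspiciously close to a known work."""
    h = imagehash.phash(Image.open(generated_path))
    # A small Hamming distance between perceptual hashes suggests close
    # visual similarity; the threshold here is an arbitrary placeholder.
    return any(h - known <= max_distance for known in known_hashes)
```

A screen like this catches only near-copies, of course; it says nothing about substantial similarity in the legal sense, which is one reason the webinar pairs post-hoc moderation with contractual and training-time mitigations.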

Speakers

    • Alex Goranin, Intellectual Property Litigator, Duane Morris LLP
    • Peter Henderson, Stanford University
    • Jess Miers, Advocacy Counsel, Chamber of Progress
    • Alex Rindels, Corporate Counsel, Jasper

Protecting Your Company’s Online Data

Digital data has become a hot commodity because it is the raw material that makes AI tools so powerful. Companies that offer content should keep pace with the evolving technology and law that can help them protect their online data.

As data becomes available online, it can be accessed in different ways, each raising distinct legal issues. In general, one basis for protecting online data lies in the creativity of the data itself, under the Copyright Act of 1976. Another lies in the technological barriers of the computer system hosting the data, under the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA). Online data can also be protected through contractual obligations or tort principles under state common law. On the data side, a company should consider its proprietary data and its user-generated data separately, though original, creative content in either category is generally eligible for copyright protection. Even without owning user-generated data, a company can still enforce the copyright by obtaining an exclusive license from its users. On the systems side, a company can evaluate security measures that restrict access to the data without severely sacrificing the visibility and usability of the company, the data, or the computer system.
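
On the systems side, even a simple access-control layer can serve as the kind of technological barrier that matters under the CFAA. Here is a minimal, hypothetical sketch of gating a data endpoint behind API keys; the framework (Flask) and every name in it are illustrative assumptions, not a recommended stack:

```python
# Minimal sketch: gating online data behind API keys, illustrating a
# technological barrier that restricts access to a select group.
# Flask and all names here are illustrative assumptions.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical keys issued to the "select group" of authorized users.
AUTHORIZED_KEYS = {"key-for-partner-a", "key-for-partner-b"}

@app.route("/data")
def serve_data():
    key = request.headers.get("X-API-Key")
    if key not in AUTHORIZED_KEYS:
        abort(401)  # Unauthorized callers never reach the data.
    return jsonify({"records": ["..."]})

if __name__ == "__main__":
    app.run()
```

A gate like this also makes every access attributable to a key holder, which dovetails with the "obscured or tracked form" scenario discussed next.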

In a typical scenario, a company may make its data available to the public as is, available in an obscured or tracked form, or accessible only to a select group. Let’s consider these scenarios separately.

Continue reading “Protecting Your Company’s Online Data”

Promoting AI Use in Developing Medical Devices

The U.S. Food and Drug Administration (FDA) has issued a draft guidance intended to promote the development of safe and effective medical devices that use a type of artificial intelligence (AI) known as machine learning (ML). The draft guidance further develops FDA’s least burdensome regulatory approach for AI/ML-enabled device software functions (ML-DSFs), which aims to increase the pace of innovation while maintaining safety and effectiveness.

Read the full Alert on the Duane Morris website.

The AI Update | May 31, 2023

#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session (full video here) focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding (full video here) zeroed in on copyright issues—the “interoperability of AI and copyright law.”

We watched all five-plus hours of testimony so you don’t have to. Here are the core takeaways from the sessions:

Continue reading “The AI Update | May 31, 2023”

Employer Guidance for Preventing Discrimination When Using AI

On May 18, 2023, the EEOC released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “Resource”), to provide employers guidance on preventing discrimination when utilizing artificial intelligence. For employers contemplating whether to use artificial intelligence in employment matters such as selecting new employees, monitoring performance, and determining pay or promotions, the Resource is a “must-read” on implementing safeguards to comply with civil rights laws.
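
At the heart of the Resource is the EEOC’s long-standing “four-fifths rule” of thumb: if one group’s selection rate is less than 80 percent of the most favored group’s rate, that disparity is generally treated as preliminary evidence of adverse impact. A toy illustration (the applicant counts below are invented):

```python
# Hypothetical illustration of the EEOC's "four-fifths" rule of thumb.
# The applicant and selection counts are invented for demonstration.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

group_a = selection_rate(48, 80)  # 60% selection rate
group_b = selection_rate(12, 40)  # 30% selection rate

impact_ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50

# A ratio below 0.80 is generally treated as preliminary evidence of
# adverse impact, warranting closer scrutiny of the selection tool.
```

The Resource emphasizes that the four-fifths comparison is merely a rule of thumb, not a safe harbor, so employers should treat a passing ratio as a starting point rather than a defense.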

Read more on the Class Action Defense Blog.

The AI Update | May 16, 2023

#HelloWorld. In this issue, we survey the tech industry’s private self-regulation—what model developers and online platforms have implemented as restrictions on AI usage. Also, one court in the U.S. hints at how much copying by an AI model is too much, and the EU releases its most recent amendments to the AI Act. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Industry self-regulation: Past AI Updates have summarized legislative and regulatory initiatives around the newest AI architectures—LLMs and other “foundation models” like GPT and Midjourney. In the meantime, LLM developers and users have not stood still. In our last issue, we discussed OpenAI’s new user opt-out procedures. While comprehensive private standards feel a long way off, here’s what a few other industry players are doing:

    • Anthropic: Last week, Anthropic, a foundation model developer, announced its “Constitutional AI” system. The core idea is to use one AI model to evaluate and critique the output of another according to a set of values explicitly provided to the system. One such broad value Anthropic champions—harmlessness to the user: “Please choose the assistant response that is as harmless and ethical as possible. Do NOT choose responses that are toxic, racist, or sexist.” The devil, of course, is in the implementation details (a rough sketch of the critique-and-revise loop appears after this list).
    • Salesforce: In a similar vein, enterprise software provider Salesforce recently released “Guidelines for Responsible Development” of “Generative AI.” The most granular guidance relates to promoting accuracy of the AI model’s responses: The guidelines recommend citing sources and explicitly labeling answers the user should double-check, like “statistics” and “dates.”
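
As promised above, here is what Anthropic’s critique-and-revise idea can look like in miniature. The complete() function below is a hypothetical stand-in for any LLM API call, and none of this is Anthropic’s actual implementation:

```python
# Minimal sketch of a Constitutional-AI-style critique-and-revise loop.
# `complete` is a hypothetical stand-in for a real LLM API call; this is
# not Anthropic's actual implementation.

CONSTITUTION = (
    "Please choose the assistant response that is as harmless and ethical "
    "as possible. Do NOT choose responses that are toxic, racist, or sexist."
)

def complete(prompt: str) -> str:
    # Demo stub: wire this to a real model endpoint in practice.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revise(user_prompt: str, rounds: int = 2) -> str:
    draft = complete(user_prompt)
    for _ in range(rounds):
        # One model critiques the draft against the stated principles...
        critique = complete(
            f"Principles: {CONSTITUTION}\n\nResponse: {draft}\n\n"
            "Identify any ways the response violates the principles."
        )
        # ...and the draft is then revised in light of that critique.
        draft = complete(
            f"Principles: {CONSTITUTION}\n\nResponse: {draft}\n\n"
            f"Critique: {critique}\n\nRewrite the response to fix the issues."
        )
    return draft

print(constitutional_revise("Tell me about your values."))
```

The design point is that the “constitution” is ordinary text, which is what makes the values explicit and auditable, and also why the implementation details matter so much.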

Continue reading “The AI Update | May 16, 2023”

The AI Update | May 2, 2023

#HelloWorld. We originally thought this edition would focus on OpenAI’s attempts to self-regulate GPT usage, but the European Union had other plans for us. This past Thursday, news broke of an agreement to add generative AI tools to the AI Act, the EU’s centerpiece AI legislation. So today’s issue starts there, before discussing OpenAI’s and others’ recent announcements regarding training data access and usage. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The EU’s Artificial Intelligence Act: The EU has been debating a proposed AI Act since 2018. In 2021, it published a legislative framework that would classify AI products into one of four categories: unacceptable risk (and therefore forbidden); high risk (and therefore subject to regular risk assessments, independent testing, transparency disclosures, and strict data governance requirements); limited risk; and minimal risk. But this approach was developed before so-called “foundation models”—LLMs like ChatGPT and image generators like DALL-E and Midjourney—exploded into the public consciousness. So questions remained about whether the AI Act would be adjusted to accommodate this new reality.

Continue reading “The AI Update | May 2, 2023”

The AI Update | April 18, 2023

from the Duane Morris Technology, Media & Telecom Group

#HelloWorld. In this edition, momentum picks up in Congress, the executive branch, and the states to regulate AI, while more intellectual-property litigation may be on the horizon. Overseas, governments continue to be wary of the new large AI models. It’s getting complicated. Let’s stay smart together. 

Proposed legislation in the U.S.: Senate Majority Leader Chuck Schumer (D-N.Y.) revealed that his office has met with AI experts to develop a framework for AI legislation, to be released in the coming weeks. The proposal’s centerpiece would require independent experts to test AI technologies before their public launch and would permit users to access those independent assessments.

This is not the only AI-related legislative effort to have emerged from Congress. Last year, Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) and Representative Yvette Clarke (D-N.Y.) proposed the Algorithmic Accountability Act of 2022, which focuses on “automated decision systems” that use AI algorithms to make “critical decisions” relating to, e.g., education, employment, healthcare, and public benefits. The proposal would require these AI systems to undergo regular “impact assessments” under the general supervision of the Federal Trade Commission. This bill has not yet emerged from committee.

Continue reading “The AI Update | April 18, 2023”

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
