The AI Update | July 27, 2023

#HelloWorld. Copyright suits are as unrelenting as the summer heat, with no relief in the forecast. AI creators are working on voluntary commitments to watermark synthetic content. And meanwhile, is ChatGPT getting “stupider”? Lots to explore. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Big names portend big lawsuits. Since ChatGPT’s public launch in November 2022, plaintiffs have filed eight major cases in federal court—mostly in tech-centric Northern California—accusing large language models and image generators of copyright infringement, Digital Millennium Copyright Act violations, unfair competition, statutory and common law privacy violations, and other assorted civil torts. (Fancy a summary spreadsheet? Drop us a line.)

Here comes another steak for the grill: This month, on CBS’ “Face the Nation,” IAC chairman Barry Diller previewed that “leading publishers” were constructing copyright cases against generative AI tech companies, viewing litigation as a linchpin for arriving at a viable business model: “yes, we have to do it. It’s not antagonistic. It’s to stake a firm place in the ground to say that you cannot ingest our material without figuring out a business model for the future.” Semafor later reported that The New York Times, News Corp., and Axel Springer were all among this group of likely publishing company plaintiffs, worried about the loss of website traffic that would come from generative AI answers replacing search engine results and looking for “billions, not millions, from AI.”


Taking Heed of AI Contracts

Duane Morris partner Neville M. Bilimoria authored the McKnight’s Long-Term Care article, “AI is everywhere! Addressing the legal risks through contracting.”

Mr. Bilimoria writes:

You can’t look in the news or see social media posts each day without hearing about artificial intelligence in healthcare. In fact, the advancements in AI in healthcare are making leaps and bounds, seemingly with each day that goes by.

But nursing homes and assisted living providers need to understand not just the benefits of how AI can improve quality of resident care and improved operations, but also the legal issues surrounding AI in your facility.

Read the full article on the McKnight’s Long-Term Care website.

U.S. Antitrust Draft Guidelines Address Using AI in Mergers

On July 19, 2023, the Department of Justice and the Federal Trade Commission (FTC) jointly released draft Merger Guidelines to amend and update both the 2010 Horizontal Merger Guidelines and the Vertical Merger Guidelines that were issued in 2020 and later rescinded by the FTC in 2021.

The draft guidelines underscore recent enforcement efforts to rein in technology mergers. They target large platform providers, as well as mergers that might entrench or extend a dominant position (suggesting that a 30 percent share implies a dominant position). The draft guidelines focus on multisided platforms and on competition between platforms, on a platform, and to displace a platform. The agencies also specifically reference the use of algorithms and artificial intelligence in assessing potential post-merger coordination.

Read the full Alert on the Duane Morris website.

The AI Update | July 13, 2023

#HelloWorld. Pushback and disruption are the themes of this edition as we look at objections to proposed regulation in Europe, an FTC investigation, the growing movement in support of uncensored chatbots, and how AI is disrupting online advertising. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Pushback against AI regulation. The AI Update has followed closely the progress of the European Union’s proposed AI Act. Today we report on pushback in the form of an open letter from representatives of companies that operate in Europe expressing “serious concerns” that the AI Act would “jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” The letter takes aim in particular at the proposed “high risk” treatment of generative AI models, worrying that “disproportionate compliance costs and disproportionate liability risks” will push companies out of Europe and harm the ability of the EU to be at the forefront of AI development. The ask from the signatories is that European legislation “confine itself to stating broad principles in a risk-based approach.” As we have explained, there is a long road and many negotiations ahead before any version of the AI Act becomes the law in Europe. So it remains to be seen whether any further revisions reflect these concerns.

The AI Update | June 29, 2023

#HelloWorld. In the midst of summer, the pace of significant AI legal and regulatory news has mercifully slackened. With room to breathe, this issue points the lens in a different direction, at some of our persistent AI-related obsessions and recurrent themes. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Stanford is on top of the foundation model evaluation game. Dedicated readers may have picked up on our love of the Stanford Center for Research on Foundation Models. The Center’s 2021 paper, “On the Opportunities and Risks of Foundation Models,” is long, but it coined the term “foundation models” to cover the new transformer LLM and diffusion image generator architectures dominating the headlines. The paper exhaustively examines these models’ capabilities; underlying technologies; applications in medicine, law, and education; and potential social impacts. In a downpour of hype and speculation, the Center’s empirical, fact-forward thinking provides welcome shelter.

Now, like techno-Britney Spears, the Center has done it again. (The AI Update’s human writers can, like LLMs, generate dad jokes.) With the European Parliament’s mid-June adoption of the EU AI Act (setting the stage for further negotiation), researchers at the Center asked this question: To what extent would the current LLM and image-generation models be compliant with the EU AI Act’s proposed regulatory rules for foundation models, mainly set out in Article 28? The answer: None right now. But open-source start-up Hugging Face’s BLOOM model ranked highest under the Center’s scoring system, getting 36 out of 48 total possible points. The scores of Google’s PaLM 2, OpenAI’s GPT-4, Stability.ai’s Stable Diffusion, and Meta’s LLaMA models, in contrast, all hovered in the 20s.


The AI Update | June 14, 2023

#HelloWorld. Regulatory hearings and debates were less prominent these past two weeks, so in this issue we turn to a potpourri of private AI industry developments. The Authors Guild releases new model contract clauses limiting generative AI uses; big tech companies provide AI customers with a series of promises and tips, at varying levels of abstraction; and the Section 230 safe harbor is ready for its spotlight. Plus, ChatGPT is no barrel of laughs—actually, same barrel, same laughs. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The Authors Guild adds new model clauses. Back in March, the Authors Guild recommended that authors insert a new model clause in their contracts with publishers prohibiting use of the authors’ work for “training artificial intelligence to generate text.” Platforms and publishers have increasingly seen this language pop up in their negotiations with authors. Now the Authors Guild is at it again. On June 1, the organization announced four new model clauses that would require an author to disclose that a manuscript includes AI-generated text; place limits (to be specified in negotiation) on the amount of synthetic text that an author’s manuscript can include; prohibit publishers from using AI narrators for audio books, absent the author’s consent; and proscribe publishers from employing AI to generate translations, book covers, or interior art, again absent consent.


Webinar: Liability Considerations in Enterprise Use of Generative AI

Duane Morris partner Alex Goranin will be moderating the webinar “Liability Considerations in Enterprise Use of Generative AI” on June 27, hosted by The Copyright Society.

For more information and to register, visit The Copyright Society website.

About the Program

Since ChatGPT burst onto the scene last fall, developers of large language and other foundation models have raced to release new versions; the number of app developers building on top of the models has mushroomed; and companies large and small have considered—and reconsidered—approaches to integrating generative AI tools within their businesses. With these decisions has come a cascade of practical business risks, and copyright and copyright-adjacent issues have taken center stage. After all, if your marketing team’s Midjourney-like AI image generator outputs artwork later accused of infringement, who is ultimately responsible? And how can you mitigate that risk—through contractual indemnity? through guardrails deployed in your training process? through post-hoc content moderation?

Speakers

    • Alex Goranin, Intellectual Property Litigator, Duane Morris LLP
    • Peter Henderson, Stanford University
    • Jess Miers, Advocacy Counsel, Chamber of Progress
    • Alex Rindels, Corporate Counsel, Jasper

Protecting Your Company’s Online Data

Digital data has become a hot commodity because it is what enables AI tools to do powerful things. Companies that offer content should keep up with the evolving technology and laws that can help them protect their online data.

As data becomes available online, it can be accessed in different ways, each raising its own legal issues. In general, one basis for protecting online data lies in the creativity of the data under the Copyright Act of 1976. Another lies in the technological barriers of the computer system hosting the data, under the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act. Online data may also be protected through contractual obligations or tort principles under state common law.

In terms of the data, a company would need to consider its proprietary data and its user-generated data separately, though creative content in either category is generally eligible for copyright protection. Even without owning user-generated data, a company can still enforce the copyright via an exclusive license from its users. In terms of the computer system, a company could evaluate different security measures for restricting access to the data without severely sacrificing the visibility and usability of the company, the data, or the computer system.

In a typical scenario, a company may make its data accessible to the public as is; publicly available in an obscured or tracked form; or accessible only to a select group. Let’s consider these scenarios separately.


Promoting AI Use in Developing Medical Devices

The U.S. Food and Drug Administration (FDA) has issued a draft guidance intended to promote the development of safe and effective medical devices that use a type of artificial intelligence (AI) known as machine learning (ML). The draft guidance further develops FDA’s least burdensome regulatory approach for machine learning-enabled device software functions (ML-DSFs), which aims to increase the pace of innovation while maintaining safety and effectiveness.

Read the full Alert on the Duane Morris website.

The AI Update | May 31, 2023

#HelloWorld. In this issue, we head to Capitol Hill and summarize key takeaways from May’s Senate and House Judiciary subcommittee hearings on generative AI. We also visit California, to check in on the Writers Guild strike, and drop in on an online fan fiction community, the Omegaverse, to better understand the vast number of online data sources used in LLM training. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Printing press, atomic bomb—or something else? On consecutive days in mid-May, both Senate and House Judiciary subcommittees held the first of what they promised would be a series of hearings on generative AI regulation. The Senate session focused on AI oversight more broadly, with OpenAI CEO Sam Altman’s earnest testimony capturing many a headline. The House proceeding zeroed in on copyright issues—the “interoperability of AI and copyright law.”

We watched all five-plus hours of testimony so you don’t have to, and distilled the core takeaways from the sessions.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
