Can a Human Behind AI Be Creative?

The Copyright Registration Guidance (Guidance) published by the United States Copyright Office in March mainly addressed whether a human providing simple prompts or other input to an artificial intelligence (AI) algorithm could obtain a copyright registration for the output that the AI algorithm generated from that input. Working with AI algorithms all the time, I previously discussed whether the creator of the AI algorithm, rather than the user, could obtain a copyright registration for that output. Now, a few months later, a court has handed down a decision on whether to grant a copyright registration to the AI algorithm in Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C.).

That’s right. The court was confronted with the issue of whether to grant a copyright registration to the AI algorithm or the machine running the AI algorithm, rather than the creator of the AI algorithm. The plaintiff in this case has been a proponent of giving credit to machines running the plaintiff’s AI algorithms instead of the plaintiff directly, regardless of whether the AI algorithms output more algorithms or artworks. See Thaler v. Vidal, No. 21-2347 (Fed. Cir. 2022).

To support the position that the plaintiff’s machine should be granted a copyright registration, the plaintiff consistently represented in the copyright application that the AI algorithm generated the work “autonomously” and that the plaintiff played “no role” in the generation. This representation undermines any creative effort that the plaintiff may have made in producing the work. In general, while an AI algorithm, once developed, may execute autonomously without human intervention, it was not developed in a vacuum, and a human could have incorporated various creative elements into it, as discussed in my previous blog post.

Continue reading “Can a Human Behind AI Be Creative?”

The AI Update | August 29, 2023

#HelloWorld. In this issue, ChatGPT cannot be the next John Grisham, the secret is out on The New York Times’ frustrations with generative AI, and YouTube looks to a technological fix for voice replicas. Summer may soon be over, but AI issues are not going anywhere. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

AI cannot be a copyright author—for now. In one of the most awaited copyright events of the summer (not Barbie-related), the federal district court in D.C. held that an AI system could not be deemed the author of a synthetically generated artwork. This was a test case brought by Stephen Thaler, a computer scientist and passionate advocate for treating AI as both copyright author and patent inventor, notwithstanding its silicon- and software-based essence. The D.C. district court, however, held firm to the policy position taken by the U.S. Copyright Office—copyright protects humans alone. In the words of the court: “human authorship is an essential part of a valid copyright claim.” Those who have followed Thaler’s efforts will remember that, about a year ago, the Federal Circuit similarly rejected Thaler’s attempt to list an AI model as an “inventor” on a patent application, holding instead that an inventor must be a “natural person.” Continue reading “The AI Update | August 29, 2023”

AI Software Settlement Highlights Risk in Hiring Decisions

In Equal Employment Opportunity Commission v. ITutorGroup, Inc., et al., No. 1:22-CV-2565 (E.D.N.Y. Aug. 9, 2023), the EEOC and a tutoring company filed a Joint Settlement Agreement and Consent Decree in the U.S. District Court for the Eastern District of New York, memorializing a $365,000 settlement of claims involving hiring software that automatically rejected applicants based on their age. This is the first EEOC settlement involving artificial intelligence (“AI”) software bias.

Read more on the Class Action Defense Blog.

The AI Update | August 10, 2023

#HelloWorld. In this issue, the state of state AI laws (disclaimer: not our original phrase, although we wish it were). Deals for training data are in the works. And striking actors have made public their AI-related proposals—careful about those “Digital Replicas.” It’s August, but we’re not stopping. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

States continue to pass and propose AI bills. Sometimes you benefit from the keen, comprehensive efforts of others. In the second issue of The AI Update, we summarized state efforts to legislate in the AI space. Now, a dedicated team at EPIC, the Electronic Privacy Information Center, has spent the summer assembling an update, “The State of State AI Laws: 2023,” a master(ful) list of all state laws enacted and bills proposed touching on AI. We highly recommend their easy-to-navigate online site; highlights below:

Continue reading “The AI Update | August 10, 2023”

AI Tools in the Workplace and the Americans with Disabilities Act

On July 26, 2023, the EEOC issued a new Guidance entitled “Visual Disabilities in the Workplace and the Americans with Disabilities Act” (the “Guidance”). This document is an excellent resource for employers and provides insight into how to handle situations that may arise with job applicants and employees who have visual disabilities. Notably, for employers that use algorithms or artificial intelligence (“AI”) as a decision-making tool, the Guidance makes clear that employers have an obligation to make reasonable accommodations for applicants or employees with visual disabilities who request them in connection with these technologies.

Read more on the Class Action Defense Blog.


The AI Update | July 27, 2023

#HelloWorld. Copyright suits are as unrelenting as the summer heat, with no relief in the forecast. AI creators are working on voluntary commitments to watermark synthetic content. And meanwhile, is ChatGPT getting “stupider”? Lots to explore. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Big names portend big lawsuits. Since ChatGPT’s public launch in November 2022, plaintiffs have filed eight major cases in federal court—mostly in tech-centric Northern California—accusing large language models and image generators of copyright infringement, Digital Millennium Copyright Act violations, unfair competition, statutory and common law privacy violations, and other assorted civil torts. (Fancy a summary spreadsheet? Drop us a line.)

Here comes another steak for the grill: This month, on CBS’ “Face the Nation,” IAC chairman Barry Diller previewed that “leading publishers” were constructing copyright cases against generative AI tech companies, viewing litigation as a linchpin for arriving at a viable business model: “yes, we have to do it. It’s not antagonistic. It’s to stake a firm place in the ground to say that you cannot ingest our material without figuring out a business model for the future.” Semafor later reported that The New York Times, News Corp., and Axel Springer were all among this group of likely publishing company plaintiffs, worried about the loss of website traffic that would come from generative AI answers replacing search engine results and looking for “billions, not millions, from AI.”

Continue reading “The AI Update | July 27, 2023”

Taking Heed of AI Contracts

Duane Morris partner Neville M. Bilimoria authored the McKnight’s Long-Term Care article, “AI is everywhere! Addressing the legal risks through contracting.”

Mr. Bilimoria writes:

You can’t look in the news or see social media posts each day without hearing about artificial intelligence in healthcare. In fact, the advancements in AI in healthcare are making leaps and bounds, seemingly with each day that goes by.

But nursing homes and assisted living providers need to understand not just the benefits of how AI can improve quality of resident care and improved operations, but also the legal issues surrounding AI in your facility.

Read the full article on the McKnight’s Long-Term Care website.

U.S. Antitrust Draft Guidelines Address Using AI in Mergers

On July 19, 2023, the Department of Justice and the Federal Trade Commission (FTC) jointly released draft Merger Guidelines to amend and update both the 2010 Horizontal Merger Guidelines and the Vertical Merger Guidelines that were issued in 2020 and later rescinded by the FTC in 2021.

The draft guidelines underscore recent enforcement efforts to rein in technology mergers. They target large platform providers, as well as mergers that might entrench or extend a dominant position (suggesting that a 30 percent share implies a dominant position). The draft guidelines focus on multisided platforms and competition between platforms, on a platform, and to displace a platform. The agencies also specifically reference the use of algorithms and artificial intelligence in assessing potential post-merger coordination.

Read the full Alert on the Duane Morris website.

The AI Update | July 13, 2023

#HelloWorld. Pushback and disruption are the themes of this edition as we look at objections to proposed regulation in Europe, an FTC investigation, the growing movement in support of uncensored chatbots, and how AI is disrupting online advertising. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Pushback against AI regulation. The AI Update has followed closely the progress of the European Union’s proposed AI Act. Today we report on pushback in the form of an open letter from representatives of companies that operate in Europe expressing “serious concerns” that the AI Act would “jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” The letter takes aim in particular at the proposed “high risk” treatment of generative AI models, worrying that “disproportionate compliance costs and disproportionate liability risks” will push companies out of Europe and harm the ability of the EU to be at the forefront of AI development. The ask from the signatories is that European legislation “confine itself to stating broad principles in a risk-based approach.” As we have explained, there is a long road and many negotiations ahead before any version of the AI Act becomes the law in Europe. So it remains to be seen whether any further revisions reflect these concerns. Continue reading “The AI Update | July 13, 2023”

The AI Update | June 29, 2023

#HelloWorld. In the midst of summer, the pace of significant AI legal and regulatory news has mercifully slackened. With room to breathe, this issue points the lens in a different direction, at some of our persistent AI-related obsessions and recurrent themes. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Stanford is on top of the foundation model evaluation game. Dedicated readers may have picked up on our love of the Stanford Center for Research on Foundation Models. The Center’s 2021 paper, “On the Opportunities and Risks of Foundation Models,” is long, but it coined the term “foundation models” to cover the new transformer LLM and diffusion image generator architectures dominating the headlines. The paper exhaustively examines these models’ capabilities; underlying technologies; applications in medicine, law, and education; and potential social impacts. In a downpour of hype and speculation, the Center’s empirical, fact-forward thinking provides welcome shelter.

Now, like techno-Britney Spears, the Center has done it again. (The AI Update’s human writers can, like LLMs, generate dad jokes.) With the European Parliament’s mid-June adoption of the EU AI Act (setting the stage for further negotiation), researchers at the Center asked this question: To what extent would the current LLM and image-generation models be compliant with the EU AI Act’s proposed regulatory rules for foundation models, mainly set out in Article 28? The answer: None right now. But open-source start-up Hugging Face’s BLOOM model ranked highest under the Center’s scoring system, getting 36 out of 48 total possible points. The scores of Google’s PaLM 2, OpenAI’s GPT-4, Stability.ai’s Stable Diffusion, and Meta’s LLaMA models, in contrast, all hovered in the 20s.

Continue reading “The AI Update | June 29, 2023”
