Mitigating AI Bias with Responsible AI Design

Now that artificial intelligence (AI) is employed widely and with unprecedented consequences, there is quite a scramble to implement mitigating measures. For example, the United States Patent and Trademark Office (USPTO) is soliciting public comments on what steps the USPTO should take to mitigate harms and risks from AI-enabled invention. Many of the proposed guardrails apply to the deployment of AI technology, conforming the original output of the AI technology to desired principles, policies, guidelines, and so on. However, it is no less valuable to improve the design of the AI technology itself, especially when various computational techniques can be readily applied.

One fundamental issue with AI technology is that it can produce inaccurate output, whether through random, sporadic errors or, more damagingly, through systemic deviations that lead to bias. This article presents a systematic review of how computational techniques can be utilized to help mitigate such bias. […]
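As a minimal sketch of one such computational technique (not drawn from the article itself), the Python snippet below compares the rate of favorable outcomes a model produces for two groups, a quantity often called the demographic parity gap; the function names and data are hypothetical.

```python
# Minimal, illustrative sketch (not from the article): one simple computational
# check for systemic bias is to compare the rate of favorable outcomes a model
# produces for different groups (the "demographic parity gap"). All names and
# data below are hypothetical.

def positive_rate(decisions):
    """Fraction of decisions in the list that are favorable (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a_decisions, group_b_decisions):
    """Absolute difference in favorable-outcome rates between two groups.
    A gap near 0 suggests parity; a large gap flags possible systemic bias."""
    return abs(positive_rate(group_a_decisions) - positive_rate(group_b_decisions))

# Hypothetical model decisions for two applicant groups
group_a = [True, True, False, True, True, False]    # 4/6 favorable
group_b = [True, False, False, False, True, False]  # 2/6 favorable

print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.2f}")
# Prints 0.33, a gap large enough to warrant further review or mitigation.
```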

Read the full article by Agatha Liu, Ph.D. 

The AI Update | September 18, 2023

#HelloWorld. In this issue, the Copyright Office asks all the right questions—but will it do something interesting with the answers? Microsoft and Adobe offer clever ideas of their own. And, surprise (not really): Two new lawsuits against AI developers. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The Copyright Office has questions. Since the spring, the U.S. Copyright Office has devoted considerable effort to its AI Initiative, launching an AI webpage, holding four public listening sessions, and hosting educational webinars. In what it calls a “critical next step,” the Office on August 30 published a notice of inquiry asking for written comments (due October 18) on around 66 wide-ranging AI-related questions. Continue reading “The AI Update | September 18, 2023”

AI Data Privacy and Security Issues

A webinar replay is available below. For more information, please visit the event website.

The Future of Digital Art as Training Material For Generative Artificial Intelligence Models

Before ChatGPT and other artificial intelligence (AI) large language models exploded on the scene last fall, there were AI art generators, based on many of the same technologies. Simplifying, in the context of art generation, these technologies involve a company first setting up a software-based network loosely modeled on the brain with millions of artificial “neurons.” […] This article has two goals: to provide a reader-friendly introduction to the copyright and right-of-publicity issues raised by such AI model training, and to offer practical tips about what art owners can do, currently, if they want to keep their works away from such training uses. […]
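As a rough sketch of the building block behind such networks (not drawn from the article itself), the Python snippet below implements a single artificial "neuron" that weights its inputs and squashes the result; real image-generation models chain millions or billions of these, with the weights learned from training data. All numbers here are made up.

```python
# Rough, hypothetical illustration (not from the article): a single artificial
# "neuron" that weights its inputs and squashes the result. Image-generation
# models chain millions or billions of these, with the weights learned from
# training data. All numbers below are made up.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # activation between 0 and 1

# Hypothetical pixel-like inputs and learned weights
activation = neuron(inputs=[0.2, 0.8, 0.5], weights=[0.9, -0.4, 0.3], bias=0.1)
print(f"Neuron activation: {activation:.3f}")  # roughly 0.53
```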

Read the full Art Business News article. 

Safeguarding Companies’ Online Data in the AI Era

The rapidly evolving landscape of advanced technology renders data one of the most valuable commodities today. This is especially true for artificial intelligence (AI), which can advance significantly in capability and complexity by learning from massive training data sets. …

[This article identifies] considerations companies should account for when undertaking efforts to protect their online data, based on an analysis of the legal protections available against unauthorized use of that data.

Read the full article by Agatha H. Liu and Ariel Seidner.

Can a Human Behind AI Be Creative?

The Copyright Registration Guidance (Guidance) published by the United States Copyright Office in March mainly addressed whether a human providing simple prompts or other input to an artificial intelligence (AI) algorithm could obtain a copyright registration for the output that the AI algorithm generated based on the human input. Working with AI algorithms all the time, I previously discussed whether the creator of the AI algorithm, and not the user, could obtain a copyright registration for that output. Now, a few months later, a court has handed down a decision on whether to grant a copyright registration to the AI algorithm in Thaler v. Perlmutter, 1:22-cv-01564 (D.D.C.).

That’s right. The court was confronted with the issue of whether to grant a copyright registration to the AI algorithm or the machine running the AI algorithm, rather than to the creator of the AI algorithm. The plaintiff in this case has been a proponent of giving credit to the machines running the plaintiff’s AI algorithms rather than to the plaintiff directly, regardless of whether the AI algorithms output more algorithms or artworks. See Thaler v. Vidal, No. 21-2347 (Fed. Cir. 2022).

To support the position that the plaintiff’s machine should be granted a copyright registration, the plaintiff consistently represented in the copyright application that the AI algorithm generated the work “autonomously” and that the plaintiff played “no role” in the generation. This representation undermines any creative effort that the plaintiff may have made in producing the work. In general, while an AI algorithm, once developed, may be executed autonomously without human intervention, the algorithm was not developed in a vacuum: a human could have incorporated various creative elements into it, as discussed in my previous blog post.

Continue reading “Can a Human Behind AI Be Creative?”

The AI Update | August 29, 2023

#HelloWorld. In this issue, ChatGPT cannot be the next John Grisham, the secret is out on The New York Times’ frustrations with generative AI, and YouTube looks to a technological fix for voice replicas. Summer may soon be over, but AI issues are not going anywhere. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

AI cannot be a copyright author—for now. In one of the most-awaited copyright events of the summer (not Barbie-related), the federal district court in D.C. held that an AI system could not be deemed the author of a synthetically generated artwork. This was a test case brought by Stephen Thaler, a computer scientist and passionate advocate for treating AI as both copyright author and patent inventor, notwithstanding its silicon- and software-based essence. The D.C. district court, however, held firm to the policy position taken by the U.S. Copyright Office—copyright protects humans alone. In the words of the court: “human authorship is an essential part of a valid copyright claim.” Those who have followed Thaler’s efforts will remember that, about a year ago, the Federal Circuit similarly rejected Thaler’s attempt to list an AI model as an “inventor” on a patent application, holding instead that an inventor must be a “natural person.” Continue reading “The AI Update | August 29, 2023”

AI Software Settlement Highlights Risk in Hiring Decisions

In Equal Employment Opportunity Commission v. ITutorGroup, Inc., et al., No. 1:22-CV-2565 (E.D.N.Y. Aug. 9, 2023), the EEOC and a tutoring company filed a Joint Settlement Agreement and Consent Decree in the U.S. District Court for the Eastern District of New York, memorializing a $365,000 settlement for claims involving hiring software that automatically rejected applicants based on their age. This is the first EEOC settlement involving artificial intelligence (“AI”) software bias.

Read more on the Class Action Defense Blog.

The AI Update | August 10, 2023

#HelloWorld. In this issue, the state of state AI laws (disclaimer: not our original phrase, although we wish it were). Deals for training data are in the works. And striking actors have made public their AI-related proposals—careful about those “Digital Replicas.” It’s August, but we’re not stopping. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

States continue to pass and propose AI bills. Sometimes you benefit from the keen, comprehensive efforts of others. In the second issue of The AI Update, we summarized state efforts to legislate in the AI space. Now, a dedicated team at EPIC, the Electronic Privacy Information Center, has spent all summer assembling an update, “The State of State AI Laws: 2023,” a master(ful) list of all state laws enacted and bills proposed touching on AI. We highly recommend reading their easy-to-navigate online site; highlights below:

Continue reading “The AI Update | August 10, 2023”

AI Tools in the Workplace and the Americans with Disabilities Act

On July 26, 2023, the EEOC issued a new Guidance entitled “Visual Disabilities in the Workplace and the Americans with Disabilities Act” (the “Guidance”). This document is an excellent resource for employers and provides insight into how to handle situations that may arise with job applicants and employees who have visual disabilities. Notably, for employers that use algorithms or artificial intelligence (“AI”) as a decision-making tool, the Guidance makes clear that employers have an obligation to make reasonable accommodations for applicants or employees with visual disabilities who request them in connection with these technologies.

Read more on the Class Action Defense Blog.

 

