All Eyes on AI Discrimination Suit

In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability. This litigation has been closely watched for its novel theory of liability based on the use of artificial intelligence in personnel decisions. For employers utilizing artificial intelligence in their hiring practices, tracking developments in this cutting-edge case is paramount. The ruling illustrates that employment screening vendors that utilize AI software may be liable for discrimination claims as agents of employers.

Read the full post on the Duane Morris Class Action Defense Blog.

4 Takeaways As Hiring Bias Suit Over Workday AI Proceeds

A closely watched discrimination lawsuit over software provider Workday’s artificial intelligence-powered hiring tools is headed into discovery after a California federal court ruled the company may be subject to federal antidiscrimination laws if its products make decisions on candidates. […]

Alex W. Karasik, a management-side attorney who is a partner at Duane Morris LLP and a member of the firm’s workplace class action group, said companies using or selling workplace-related AI tools need to track the Workday proceedings closely.

“This is definitely a case to watch, as it’s a landmark case involving the use of artificial intelligence and the hiring process,” he said. “Both employers and technology vendors, particularly those involved with artificial intelligence or algorithmic decision-making tools, absolutely need to pay attention to this case.”

He said [the] decision sets out critical guidelines for courts’ evaluations of who may be on the hook when a vendor of AI-based hiring tools faces allegations that its product churns out biased results. […]

Read the full article on the Law360 website (subscription may be required).

Getting Ahead of New Risks in Commercial Transactions in the Age of AI

The pervasiveness of artificial intelligence (AI) is transforming the commercial transactions landscape. Providers across industries are looking to utilize third-party AI tools, or utilize customer data to train AI models, in connection with providing services or implementing use cases proposed by their customers to create efficiencies and cost savings. The intellectual property (IP) stakes are heightened, and parties on either side of a transaction will need to carefully leverage agreements to maintain IP rights in their own data, secure IP rights in resulting products, and protect themselves against claims of infringement.

Read the full Landslide article by Duane Morris’ Ariel Seidner. (ABA membership required.)

Webinar: Tech Sector Regulations, Developments and Trends in the U.S., U.K. and EU

Duane Morris’ Technology, Media and Telecom Industry Group will present a webinar, Tech Sector Sanctions, Export Controls and Foreign Investment Rules in the U.S., the U.K. and the EU, on Wednesday, April 24, 2024, at 12:00 p.m. Eastern time | 5:00 p.m. London time.

The speakers will discuss recent U.S. executive orders and national security directives on inbound and outbound investment, artificial intelligence and sensitive personal data, as well as other developments and trends. REGISTER FOR THE WEBINAR.

Mitigating AI Risks for Beauty Companies

Kelly Bonner and Agatha Liu of Duane Morris LLP shared their insights and experience with CosmeticsDesign on the risks of incorporating AI technology into business practices and how beauty companies can protect themselves.

Common uses for AI in beauty & associated risks

One of the most common uses for AI technology is personalizing products and offering personalized product recommendations. “As beauty has become increasingly personalized,” Bonner explained, “companies are increasingly deploying AI technologies to enable customers to visualize new looks (virtual try-on tech) or communicate with customers via chatbots that act as virtual assistants and offer personalized product recommendations.”

Continue reading “Mitigating AI Risks for Beauty Companies”

AI Implementation Risks in the Beauty Industry

Duane Morris partner Agatha Liu spoke with Personal Care Insights on potential risks, including personalization, appearance bias and regulatory compliance, as beauty companies integrate AI technologies.

How do you perceive the potential risks associated with integrating AI technologies to enhance customer experiences in the beauty industry?
Liu: In the beauty context, it’s important for companies to be aware of potential pitfalls in integrating AI technologies like virtual try-on technology (VTO), automated product or service applications or chatbots that act as virtual assistants and offer real-time, responsive product recommendations. These risks can include a lack of accuracy, lack of propriety (possibly giving offense), invasion of consumer privacy or possible IP infringement.

Read the full interview on the Personal Care Insights website. 

FTC Staff Issues Reminders to AI Companies

Today, the Staff in the Office of Technology of the Federal Trade Commission (“FTC”) posted a reminder to AI companies, enumerating the ways that they can run afoul of the laws enforced by the FTC. In particular, FTC Staff called out Model-as-a-Service companies and stressed the importance of safeguarding individual and proprietary data involved in creating the models. FTC Staff indicated that a failure to do so could raise both consumer protection and competition concerns. Further, FTC Staff warned that AI companies need to be forthcoming about how data is used, and that companies that omit material facts that would affect whether customers buy a particular product or service may run afoul of consumer protection laws.


Guidelines Discuss Role of Tech in Mergers

On December 18, 2023, the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) jointly issued new Merger Guidelines. The new guidelines amend, update and replace the numerous versions of merger guidelines previously issued by both agencies.

Big tech platforms will likely remain in the agencies’ crosshairs, including through retrospective or prospective review of smaller acquisitions that may enhance or extend dominant positions based on technology platforms, or that arguably eliminate potential competition.

Read the full Alert on the Duane Morris website. 

New White House Executive Order Highlights Increased Complexity in AI Regulation – A Cross-Practice Overview

Section authors: Sandra A. Jeskie, Michelle Hon Donovan, Robert Carrillo, Ariel Seidner, Milagros Astesiano, Alex W. Karasik, Geoffrey M. Goodale, Neville M Bilimoria, Edward M. Cramp, Ted J. Chiappari, Kristopher Peters and M. Alejandra Vargas.

The White House’s October 30, 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“EO”) signals increased governmental regulation over the development and use of artificial intelligence (“AI”) models. While the United States currently does not have a comprehensive AI regulation regime, many federal government agencies already regulate the use and development of AI through a complex framework of rules and regulations. President Biden’s EO promises to add a new layer of complexity by introducing sweeping changes affecting a wide variety of industries. Duane Morris’ multidisciplinary team of AI attorneys is ready to help clients working with AI tools stay abreast of new regulations in this rapidly evolving area of law. Below, we summarize the most significant changes stemming from the White House’s most recent AI EO.

Continue reading “New White House Executive Order Highlights Increased Complexity in AI Regulation – A Cross-Practice Overview”

Executive Order on Use of AI

On October 30, 2023, President Biden signed an Executive Order (the “EO”) providing guidance for employers on the emerging use of artificial intelligence in the workplace. The EO establishes industry standards for AI security, innovation, and safety across significant employment sectors. Spanning more than 100 pages, the robust EO endeavors to set parameters for responsible AI use, seeking to harness AI for good while mitigating the risks associated with its usage.

Read more on the Duane Morris Class Action Defense Blog.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
