All Eyes on AI Discrimination Suit

In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability. This litigation has been closely watched for its novel case theory based on the use of artificial intelligence in making personnel decisions. For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount. The ruling illustrates that employment screening vendors that utilize AI software may be liable for discrimination claims as agents of employers.

Read the full post on the Duane Morris Class Action Defense Blog.

4 Takeaways As Hiring Bias Suit Over Workday AI Proceeds

A closely watched discrimination lawsuit over software provider Workday’s artificial intelligence-powered hiring tools is headed into discovery after a California federal court ruled the company may be subject to federal antidiscrimination laws if its products make decisions on candidates. […]

Alex W. Karasik, a management-side attorney who is a partner at Duane Morris LLP and a member of the firm’s workplace class action group, said companies using or selling workplace-related AI tools need to track the Workday proceedings closely.

“This is definitely a case to watch, as it’s a landmark case involving the use of artificial intelligence [in] the hiring process,” he said. “Both employers and technology vendors, particularly those involved with artificial intelligence or algorithmic decision-making tools, absolutely need to pay attention to this case.”

He said [the] decision sets out critical guidelines for courts’ evaluations of who may be on the hook when a vendor of AI-based hiring tools faces allegations that its product churns out biased results. […]

Read the full article on the Law360 website (subscription may be required).

Getting Ahead of New Risks in Commercial Transactions in the Age of AI

The pervasiveness of artificial intelligence (AI) is transforming the commercial transactions landscape. Providers across industries are looking to use third-party AI tools, or to use customer data to train AI models, in connection with providing services or implementing use cases proposed by their customers to create efficiencies and cost savings. The intellectual property (IP) stakes are heightened, and parties on either side of a transaction will need to carefully structure agreements to maintain IP rights in their own data, secure IP rights in resulting products, and protect themselves against claims of infringement.

Read the full Landslide article by Duane Morris’ Ariel Seidner. (ABA membership required.)

Navigating the Use of AI Tools in Legal Practice Before Pa.’s Federal District Courts

By now, litigators appreciate that a degree of technological expertise is needed to practice law effectively. Everyone has heard about the unfortunate attorney in Texas who appeared at a Zoom hearing as a worried kitten. But in the past year, attorneys have become more attuned to the potential and risks of artificial intelligence (AI). Last June, lawyers in New York made headlines after relying on a chatbot for legal research, which led to sanctions for unknowingly submitting fictitious case law. One journalist even found himself in a love triangle with a chatbot bent on ending his marriage. In spite of these cautionary tales, the use of AI in the legal profession is on the rise as trusted legal research services like LexisNexis and Westlaw roll out AI-assisted research functions and major tech companies integrate AI into their products.

Read The Legal Intelligencer article by Rachel Good on the Duane Morris website.

Litigation Implications of Using AI Tools in Your Business

Artificial intelligence use cases are expanding at a rapid rate, and the pressure is mounting for businesses to leverage that technology or risk being left behind by their competitors. In addition to open-source applications, businesses are using enterprise-specific tools that enable employees to use generative AI technology at work. This includes licensed versions of the open-source models or business-specific tools developed alongside the applications the business is already using.

Read the article by Sarah O’Laughlin Kulik on the Duane Morris website.

Artificial . . . evidence?

Earlier this spring, a Washington state court judge issued what is widely believed to be the first evidentiary decision regarding artificial intelligence. In Washington v. Puloka, following a Frye hearing, the judge excluded AI-enhanced video from being considered as evidence. The video originated from Snapchat and was enhanced using Topaz Labs AI Video, a commercially available software program widely used in the cinematography community. The judge was not persuaded by this widespread commercial adoption, and held that the relevant community for purposes of Frye was the forensic video analysis community – which had not accepted the use of Topaz AI.

The opinion shows careful consideration of an issue of first impression. Notably, it was important to the judge’s decision that another version of the video (the original) was available and usable – even if it was low resolution, with motion blur. Further, the expert who edited the video did not know the details of how the Topaz Labs AI program worked – that is, he was not sure whether it used generative AI, could not testify to the reliability of the program, and did not know what datasets it was trained on. A different result may prevail where no alternative version exists, or where there is more testimony regarding the operation of the AI system at issue.

These issues will continue to pop up in courts across the country, and may need to be dealt with in a systematic way to ensure greater consistency. For example, the Advisory Committee on Evidence Rules has been considering proposed amendments to Rules 901 and 702 that would directly address AI-generated evidence.

The AI Update | May 23, 2024

#HelloWorld. Summer days are almost here. In this issue, we dive into the new Colorado AI Act, explore the impact of AI technologies on search providers’ liability shields, and track a U.S. district court’s strict scrutiny of anti-web-scraping terms of use. We finish by recapping a spirited test match on AI policy across the pond. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Continue reading “The AI Update | May 23, 2024”

When Does Use of AI Set Off an Alarm in the Invention Process?

As generative AI is increasingly used to process information and generate new content, one possible application is to create an alternative embodiment in a patent application. This could happen when an inventor creates an original embodiment, and then instructs an AI system to create a variant of the original embodiment to achieve broad coverage. Conceivably, the AI system is configured to create an alternative embodiment based on existing data used to train the AI system or additional information that can introduce changes to the original embodiment, such as prior art in the field. Would such use of AI be an innocent act or should it trigger an alarm like certain other uses of AI?

Continue reading “When Does Use of AI Set Off an Alarm in the Invention Process?”

Exploring Legal Risks: AI’s Role in Employment Discrimination Cases

Duane Morris Takeaway: Artificial intelligence took the employment world by storm in 2023, quickly becoming one of the most talked-about and debated subjects among corporate counsel across the country. Companies will continue to use AI as a resource to enhance decision-making processes for the foreseeable future as these technologies evolve and take shape in a myriad of employment functions. As these processes are fine-tuned, those who seek to harness the power of AI must be aware of the risks associated with its use. This featured article analyzes two novel AI lawsuits and highlights recent governmental guidance related to AI use. As the impact of AI is still developing, companies should recognize the types of claims apt to be brought over the use of AI screening tools in the employment context and the implications of possible discriminatory conduct stemming from these tools.

In the Spring 2024 issue of the Journal of Emerging Issues in Litigation, Duane Morris partners Jerry Maatman and Alex Karasik and associate George Schaller analyze key developments in litigation and enforcement shaping the impact of artificial intelligence in the workplace and its subsequent legal risks. Read the full featured article here.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
