Illinois Employment Legislation Regulates AI and Amends BIPA

In the span of 10 days in August 2024, Illinois Governor J.B. Pritzker signed into law a series of significant employment bills, paving the way for a new employment landscape beginning in 2025 and 2026. The new legislation includes:

    • Adding new requirements for employers utilizing artificial intelligence in their decision-making processes, and imposing liability under the Illinois Human Rights Act if those AI systems create a discriminatory effect;
    • Passing long-awaited reforms to the Biometric Information Privacy Act that limit the number of violations an individual may accumulate under the law.

Read the full Alert on the Duane Morris website.

Next up for Medtech: Being Generative in Domain-Specific Languages

Given the vast amounts of data available, including raw measurements, diagnostic information, treatment plans, and regulatory guidelines, the biomedical technologies sector stands to gain immensely from artificial intelligence (AI), particularly machine learning (ML).

ML, at its core, learns from training datasets to identify patterns, which can then be applied to new input data to make direct inferences. For instance, if specific body scans frequently result in a particular diagnosis, ML can be used to quickly provide that diagnosis when similar scans are encountered, thus aiding in disease diagnosis.
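The pattern-matching idea described above can be sketched with a minimal nearest-neighbor classifier: the model "learns" labeled scan feature vectors, then labels a new scan by its closest match in the training data. The feature values and diagnosis labels below are purely illustrative, not real clinical data or any vendor's actual model.

```python
def nearest_neighbor_diagnosis(training, new_scan):
    """Return the diagnosis of the training scan closest to new_scan."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training, key=lambda item: dist(item[0], new_scan))
    return best[1]

# Hypothetical training set: (scan feature vector, recorded diagnosis)
training = [
    ((0.9, 0.1, 0.8), "condition A"),
    ((0.2, 0.7, 0.1), "healthy"),
    ((0.8, 0.2, 0.9), "condition A"),
]

print(nearest_neighbor_diagnosis(training, (0.85, 0.15, 0.85)))  # condition A
print(nearest_neighbor_diagnosis(training, (0.1, 0.8, 0.2)))     # healthy
```

Production medical ML systems are vastly more sophisticated, but the core inference step is the same: a new input is mapped to the diagnosis associated with similar patterns seen in training.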

Read the full article by Duane Morris partner Agatha H. Liu, PhD on the MD+DI website.

Artificial Intelligence Employment Law Enacted in Illinois

On August 9, 2024, Illinois enacted its landmark artificial intelligence employment law, HB 3773. This legislation, which amends the Illinois Human Rights Act, endeavors to prevent discriminatory consequences of using AI in employment decision-making processes. This law goes into effect on January 1, 2026. Illinois is one of 34 states that have either enacted or proposed laws regulating the use of artificial intelligence. Read the full Alert on the Duane Morris website.

Changes to Illinois Biometric Data Law Lower Liability, but the Stakes Remain High

In recent years, a weighty question has hung over companies that process biometric information as part of their standard operating procedures: What is our risk exposure? On August 2, 2024, Illinois Governor J.B. Pritzker signed into law a bill passed by the Illinois Legislature in May to amend BIPA in a way that is expected to limit the risk exposure associated with violations. The amended text of BIPA now indicates that violations essentially occur on a per-person basis, not a per-scan basis. This is expected to yield a marked decrease in the number of violations for which a company may be liable, though penalties of up to $5,000 may still add up quickly where thousands of individuals or more are implicated. Read the full Alert on the Duane Morris website.

Embracing Artificial Intelligence in the Energy Industry

Last year, President Joe Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Since the issuance of the executive order, a lot of attention has been focused on the provision requiring “the head of each agency with relevant regulatory authority over critical infrastructure … to assess potential risks related to the use of AI in critical infrastructure sectors involved, … and to consider ways to mitigate these vulnerabilities.” Naturally, government agencies generated numerous reports cataloging the well-documented risks of AI. At the same time, nearly every company has implemented risk-mitigation guidelines governing the use of artificial intelligence. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringements, to broader societal risks posed by automated decision-making tools. Perhaps because of these risks, less attention has been focused on the offensive applications of AI, and relatedly, fewer companies have implemented guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risks, as a recent report by the Department of Energy highlights.

Read The Legal Intelligencer article by Duane Morris partners Phil Cha and Brian H. Pandya.

Artificial Intelligence Employment Discrimination Lawsuit Proceeds

In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. July 12, 2024) (ECF No. 80), Judge Rita F. Lin of the U.S. District Court for the Northern District of California granted in part and denied in part Workday’s Motion to Dismiss Plaintiff’s Amended Complaint concerning allegations that Workday’s algorithm-based screening tools discriminated against applicants on the basis of race, age, and disability. This litigation has been closely watched for its novel case theory based on artificial intelligence use in making personnel decisions. For employers utilizing artificial intelligence in their hiring practices, tracking the developments in this cutting-edge case is paramount. This ruling illustrates that employment screening vendors who utilize AI software may potentially be liable for discrimination claims as agents of employers.

Read the full post on the Duane Morris Class Action Defense Blog.

Suit Involving Artificial Intelligence-Powered Hiring Tools Heads to Discovery

A closely watched discrimination lawsuit over software provider Workday’s artificial intelligence-powered hiring tools is headed into discovery after a California federal court ruled the company may be subject to federal antidiscrimination laws if its products make decisions on candidates. […]

Alex W. Karasik, a management-side attorney who is a partner at Duane Morris LLP and a member of the firm’s workplace class action group, said companies using or selling workplace-related AI tools need to track the Workday proceedings closely.

“This is definitely a case to watch, as it’s a landmark case involving the use of artificial intelligence and the hiring process,” he said. “Both employers and technology vendors, particularly those involved with artificial intelligence or algorithmic decision-making tools, absolutely need to pay attention to this case.”

He said [the] decision sets out critical guidelines for courts’ evaluations of who may be on the hook when a vendor of AI-based hiring tools faces allegations that its product churns out biased results. […]

Read the full article on the Law360 website (subscription may be required).

The Age of Artificial Intelligence and Commercial Transactions

The pervasiveness of artificial intelligence (AI) is transforming the commercial transactions landscape. Providers across industries are looking to utilize third-party AI tools, or utilize customer data to train AI models, in connection with providing services or implementing use cases proposed by their customers to create efficiencies and cost savings. The intellectual property (IP) stakes are heightened, and parties on either side of a transaction will need to carefully leverage agreements to maintain IP rights in their own data, secure IP rights in resulting products, and protect themselves against claims of infringement.

Read the full Landslide article by Duane Morris’ Ariel Seidner. (ABA membership required.)

The Use of Artificial Intelligence Tools Before Pennsylvania Courts

By now, litigators appreciate that a degree of technological expertise is needed to practice law effectively. Everyone has heard about the unfortunate attorney in Texas who appeared at a Zoom hearing as a worried kitten. But in the past year, attorneys have become more attuned to the potential and risks of artificial intelligence (AI). Last June, lawyers in New York made headlines after relying on a chatbot’s research skills, leading to sanctions for unknowingly submitting fictitious caselaw. One journalist even found himself in a love triangle with a chatbot bent on ending his marriage. In spite of these cautionary tales, the use of AI in the legal profession is on the rise as trusted legal research services like LexisNexis and Westlaw roll out AI-assisted research functions and major tech companies integrate AI into their products.

Read The Legal Intelligencer article by Rachel Good on the Duane Morris website.

Colorado Privacy Act’s Universal Opt-Out Provision Goes Into Effect July 1, 2024

While the Colorado Privacy Act (CPA) has already been in effect, as of July 1, 2024, companies that meet the threshold compliance criteria for the CPA and that engage in the processing of personal data for purposes of targeted advertising or the sale of personal data (“covered entities”) must implement a universal opt-out mechanism, which allows users to more easily exercise their opt-out rights with these covered entities. Specifically, a universal opt-out mechanism allows a user to configure their internet browser settings so that the websites the user visits from that browser automatically receive the user’s opt-out signal. Covered entities must recognize and honor a user’s opt-out preferences where communicated through such a mechanism.
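As a minimal sketch of what honoring a universal opt-out signal can look like server-side, the snippet below checks for the Global Privacy Control header ("Sec-GPC: 1"), one browser-based mechanism proposed for communicating such preferences. The function name and the plain-dict request headers are illustrative, not any particular framework's API, and real CPA compliance involves more than this check.

```python
def user_opted_out(headers):
    """Treat the request as opted out of targeted advertising / sale
    of personal data when the browser sends the GPC signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

# A browser configured to send the universal opt-out signal:
print(user_opted_out({"Sec-GPC": "1"}))  # True
# A browser with no signal set:
print(user_opted_out({}))                # False
```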

Read the full Alert on the Duane Morris LLP website.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.