What Should GenAI Not Do in Healthcare?

With the advent of generative AI models like Med-PaLM and ChatGPT, providers can now type complex medical questions into a chat box and receive sophisticated (and hopefully accurate) answers. This ability surpasses previous AI applications in the potential to serve patients, but also in the potential to run afoul of laws like corporate practice of medicine (CPOM) rules, the False Claims Act (FCA), and FDA regulations. These concerns — on top of the risk of a generative AI model fabricating answers, known as “hallucinations” — mean that providers should proceed with extreme caution before incorporating generative AI tools into their practices.

Read the full article by Matthew Mousley on the Wharton Healthcare Quarterly website.

Decoding and Leveraging AI Regulations for the Beauty Sector in the US and EU

Duane Morris’ Agatha Liu and Kelly Bonner were interviewed by Personal Care Insights about the challenges and opportunities beauty companies face while using AI to appeal to younger consumer demographics. Below is an excerpt of the article.

How does the competitive landscape of the beauty industry impact businesses’ use of AI technologies, especially when it comes to targeting younger consumer segments?
Bonner: The highly competitive nature of the beauty industry, with its desire to appeal to younger consumers, is certainly a key driver in beauty brands embracing AI tools to offer enhanced customer shopping experiences.

Can you provide some context about US AI regulations that the beauty industry should know? What do you expect is coming, especially considering the AI Act in the EU?

Liu: The EU AI Act imposes specific obligations on the providers and deployers of so-called high-risk AI systems, including testing, documentation, transparency and notification duties.

To read the full interview, please visit the Personal Care Insights page.

AI Implementation Risks in the Beauty Industry

Duane Morris partner Agatha Liu spoke with Personal Care Insights on potential risks, including personalization, appearance bias and regulatory compliance, as beauty companies integrate AI technologies.

How do you perceive the potential risks associated with integrating AI technologies to enhance customer experiences in the beauty industry?
Liu: In the beauty context, it’s important for companies to be aware of potential pitfalls in integrating AI technologies like virtual try-on technology (VTO), automated product or service applications or chatbots that act as virtual assistants and offer real-time, responsive product recommendations. These risks can include a lack of accuracy, lack of propriety (possibly giving offense), invasion of consumer privacy or possible IP infringement.

Read the full interview on the Personal Care Insights website. 

Senate Democrats Introduce Bill to Scrutinize Price-Fixing Algorithms

Several Democratic senators introduced a bill intended to stop companies from utilizing predictive technology to raise prices. Businesses are increasingly delegating important competitive decisions, including price-setting power, to artificial intelligence, algorithms, and other predictive technology software. The new bill, titled the Preventing Algorithmic Collusion Act, is intended to ensure that such conduct by direct competitors to raise prices does not avoid scrutiny under the antitrust laws. The proposed bill includes several important aspects. First, it would presume a price-fixing agreement exists whenever direct competitors raise prices by sharing competitively sensitive information through pricing algorithm software. Second, it would require businesses to disclose the use of algorithms in setting prices and allow antitrust enforcers to audit the algorithm. Third, it would prohibit companies from using competitively sensitive information from direct competitors in developing a pricing algorithm. Fourth, it would direct the FTC to study the impact of pricing algorithms on competition. Businesses utilizing technology to help with pricing and other competitive decisions should monitor these enforcement efforts.

AI Updates at Legalweek

Privacy and data breach class action litigation, as well as AI issues, are among the key issues that keep businesses and corporate counsel up at night. Over $1 billion was procured in settlements and jury verdicts over the last year for these types of “bet-the-company” cases. At the ALM Law.com Legalweek 2024 conference in New York City, Duane Morris partner Alex W. Karasik was a panelist at the session “Trends in US Data Privacy Laws and Enforcement.” The conference, which had over 6,000 attendees, produced excellent dialogues on how cutting-edge technologies can potentially lead to class action litigation.

Read more on the Duane Morris Class Action Defense Blog.

Calif. Court Dismisses AI Suit Involving Discrimination

In Mobley v. Workday, Inc., Case No. 23-CV-770 (N.D. Cal. Jan. 19, 2024) (ECF No. 45), Judge Rita F. Lin of the U.S. District Court for the Northern District of California dismissed a lawsuit against Workday involving allegations that algorithm-based applicant screening tools discriminated against applicants on the basis of race, age, and disability. With businesses more frequently relying on artificial intelligence to perform recruiting and hiring functions, this ruling is helpful for companies facing algorithm-based discrimination lawsuits in terms of potential strategies to attack such claims at the pleading stage.

Read more on the Duane Morris Class Action Defense Blog.

AI-Related Healthcare Fraud on DOJ’s Radar

Artificial intelligence (AI) can enhance efficiencies in providing healthcare in many ways, one of which is by utilizing algorithms to read medical records and thereby assist providers in better understanding their patients and treatments that may be available. Increasingly, electronic medical record (EMR) software companies are utilizing AI to boost their products, offering hospitals, healthcare facilities, and physicians powerful tools that can enhance their decision-making as to operations and treatment.

Read more on the Duane Morris Health Law Blog.

The AI Update | January 26, 2024

#HelloWorld. January has not been especially frantic on the legal-developments-in-AI front. Yes, we know the anticipated final text of the EU AI Act was published unofficially, but the final vote hasn’t happened yet, so we’re biding time for now. Meanwhile, in this issue, we check in with state bar associations, SAG-AFTRA, and the FTC. They have things to say about AI policy too, so we’ll listen. Let’s stay smart together.  (Subscribe to the mailing list to receive future issues.)

Continue reading “The AI Update | January 26, 2024”

FTC launches GenAI investigation

The Federal Trade Commission announced today that it has begun an investigation into generative AI investments and partnerships. The FTC is using its investigative power pursuant to Section 6(b) of the FTC Act, which allows the FTC to issue compulsory process (similar to a subpoena or Civil Investigative Demand) to learn information about an organization without a specific law-enforcement purpose. Historically, the FTC has used its 6(b) power to conduct studies regarding particular industries or practices that may inform future agency positions or enforcement priorities. The investigation announced today is a concrete fact-gathering step by the FTC regarding the regulation of generative AI.

What does herring fishing have to do with AI?

Herring fishing – of all things – could have a big impact on AI regulation in 2024. Two cases brought by herring fishing companies, now before the Supreme Court, could have wide-reaching influence. The cases challenge actions taken by the National Marine Fisheries Service and longstanding Chevron deference. Under Chevron, courts afford deference to reasonable agency interpretations of ambiguous laws. At oral argument last week, the Court signaled a willingness to overturn Chevron deference. This is notable for the artificial intelligence space, which lacks explicit legislation from Congress. Indeed, last year’s Executive Order on Artificial Intelligence is largely directed at federal agencies, instructing them to take action. In the absence of Chevron deference, actions taken by agencies pursuant to that order could be more susceptible to legal challenge. Justice Kagan even called out AI at oral argument as an area that could see effects from the Court’s ruling. The Supreme Court is expected to rule by the end of June.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.