The AI Update | May 23, 2024

#HelloWorld. Summer days are almost here. In this issue, we dive into the new Colorado AI Act, explore the impact of AI technologies on search providers’ liability shields, and track a U.S. district court’s strict scrutiny of anti-web-scraping terms of use. We finish by recapping a spirited test match on AI policy across the pond. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The AI Update | April 23, 2024

#HelloWorld. In this issue, we zoom in on the world of AI model training, looking at both dataset transparency and valuation news. Then we zoom out, highlighting Stanford’s helpful summary of 2023 AI regulations and hot-off-the-press ethical guidance on AI use for lawyers from the New York State Bar. It may be a grab bag, but it’s one worth grabbing. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The AI Update | January 26, 2024

#HelloWorld. January has not been especially frantic on the legal-developments-in-AI front. Yes, we know the anticipated final text of the EU AI Act was published unofficially, but the final vote hasn’t happened yet, so we’re biding time for now. Meanwhile, in this issue, we check in with state bar associations, SAG-AFTRA, and the FTC. They have things to say about AI policy too, so we’ll listen. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The AI Update | January 11, 2024

#HelloWorld. It’s 2024 and we… are… back. Lots to catch up on. AI legal developments worldwide show no signs of letting up, so here’s our New Year’s resolution: We’re redoubling efforts to serve concise, focused guidance of immediate use to the legal and business community. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The AI Update | November 8, 2023

#HelloWorld. The days are now shorter, but this issue is longer. President Biden’s October 30th Executive Order deserves no less. Plus, the UK AI Safety Summit warrants a drop-by, and three copyright and right-of-publicity theories come under a judicial microscope. Read on to catch up. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

The AI Update | October 5, 2023

#HelloWorld. Fall begins, and the Writers Guild strike ends. In this issue, we look at what that means for AI in Hollywood. We also run through a dizzying series of self-regulating steps AI tech players have undertaken. As the smell of pumpkin spice fills the air, let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

AI and the Writers Guild. Screenwriters ended a nearly five-month strike with a tentative new agreement good through mid-2026. The Minimum Basic Agreement (MBA) includes multiple AI-centric provisions.

The AI Update | June 29, 2023

#HelloWorld. In the midst of summer, the pace of significant AI legal and regulatory news has mercifully slackened. With room to breathe, this issue points the lens in a different direction, at some of our persistent AI-related obsessions and recurrent themes. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Stanford is on top of the foundation model evaluation game. Dedicated readers may have picked up on our love of the Stanford Center for Research on Foundation Models. The Center’s 2021 paper, “On the Opportunities and Risks of Foundation Models,” is long, but it coined the term “foundation models” to cover the new transformer LLM and diffusion image generator architectures dominating the headlines. The paper exhaustively examines these models’ capabilities; underlying technologies; applications in medicine, law, and education; and potential social impacts. In a downpour of hype and speculation, the Center’s empirical, fact-forward thinking provides welcome shelter.

Now, like techno-Britney Spears, the Center has done it again. (The AI Update’s human writers can, like LLMs, generate dad jokes.) With the European Parliament’s mid-June adoption of the EU AI Act (setting the stage for further negotiation), researchers at the Center asked: To what extent would current LLM and image-generation models comply with the Act’s proposed rules for foundation models, mainly set out in Article 28? The answer: none of them, at least not yet. But open-source start-up Hugging Face’s BLOOM model ranked highest under the Center’s scoring system, earning 36 of 48 possible points. The scores of Google’s PaLM 2, OpenAI’s GPT-4, Stability.ai’s Stable Diffusion, and Meta’s LLaMA models, in contrast, all hovered in the 20s.
