#HelloWorld. Summer days are almost here. In this issue, we dive into the new Colorado AI Act, explore the impact of AI technologies on search providers’ liability shields, and track a U.S. district court’s strict scrutiny of anti-web-scraping terms of use. We finish by recapping a spirited test match on AI policy across the pond. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)
Colorado AI Act. These days, states seem to be driving much of the action on AI legislation. In the most recent development, Colorado just passed SB 24-205 “Concerning Consumer Protections in Interactions with Artificial Intelligence.” It was signed into law with gubernatorial reservations after the Connecticut bill on which it was modeled failed due to concerns about stifling innovation. The law takes effect on February 1, 2026, and will be enforced by the state attorney general.
Following a framework reminiscent of earlier state laws and the EU AI Act, Colorado’s new law sets out requirements for developers and deployers of “high-risk artificial intelligence systems”—those impacting “consequential decisions” like education enrollment, employment, lending, essential government services, health-care services, housing, insurance, or legal services.
Under the law, developers’ and deployers’ principal obligation is to “use reasonable care to protect consumers from any known and reasonably foreseeable risks of algorithmic discrimination.” (Although, for customer-facing chatbots, having a user policy forbidding discrimination may suffice.)
Additionally, developers must (a) provide a high-level summary of data types used to train the AI system; and (b) share how the system was evaluated for performance and to mitigate discrimination. Deployers must (a) conduct impact assessments at least annually and following any major system modification; (b) notify consumers when an AI system is being used; and (c) provide consumers with information about the system, an opportunity to correct their personal data, and a chance to appeal adverse consequential decisions.
Not-so-safe harbors? “AI Overviews”—webpage summaries generated by LLMs in response to web search queries—are coming soon to fabled search engines near you. Traditionally, these kinds of online intermediary platforms were able to benefit from safe harbors like Section 230 of the Communications Decency Act and Section 512(a) of the Digital Millennium Copyright Act. These legal shields, roughly speaking, protect online platforms against tort and copyright claims when the platforms merely pass through content provided by third parties (and comply with various other conditions).
But what to do when AI-generated summaries leave out citations and hyperlinks to the original sources? Can these online safe harbors still apply then? While the law remains open, a recent Washington Post article provides an early exploration of the issue. One strand of emerging thinking signals significant risk that these legal shields may be lost. By leaving out attribution information, AI summaries can obscure the third-party content from which the answers were derived. So what can be done? Perhaps search engines will deliberately build back hyperlinks and attribution in future iterations of AI summaries. Or perhaps, per the Post, they’ll frame the generated summaries as “mere suggestions rather than actionable information.” Internet law aficionados, watch this space closely.
Public data scraping, terms of use, and copyright preemption. On the subject of websites: Since the Ninth Circuit’s 2019 decision in hiQ v. LinkedIn, there’s been a heuristic of sorts around scraping the web: scraping is generally permitted for data designated as public (think your public-facing social media account) and not behind a paywall, so long as the scraper isn’t on actual notice of anti-scraping clauses in binding, enforceable website terms of use. On May 9, however, the Northern District of California took a turn down a different road.
In X Corp. v. Bright Data Ltd., No. C 23-03698 WHA, Dkt. No. 83 (N.D. Cal. May 9, 2024), the former Twitter sued Bright Data, alleging, among other things, that its scraping of public posts violated X’s Terms of Service, including a specific clause stating that “scraping the Services in any form, for any purpose without our prior written consent is expressly prohibited.” Straightforward contract breach, right? Not so fast. The district court held that copyright law preempted the claim. Why? Because those same Terms of Service gave X only a nonexclusive license to users’ posts, not ownership of that data. Why does that matter? Because copyright law permits only exclusive owners to enforce rights in their works against third-party copiers—and X’s anti-scraping contract clause would seem to confer those classic ownership rights on a nonexclusive licensee. So? According to the court, this creates a “private copyright system” that “would yank into the private domain and hold for sale information open to all, exercising a copyright owner’s right to exclude where it has no such right.”
The upshot for AI: With high-quality online data only growing in value for model training, site owners that rely on publicly posted content would do well to start marshalling legal strategies for defending the anti-scraping provisions of their terms of use.
What we’re reading. The U.K. House of Lords is keeping the pressure on. Back in February, its Select Committee on Communications and Digital published a report exhorting the U.K. government to address certain generative AI risks, including copyright fairness. The government responded in April, but the Committee just keeps on pushing. On May 2, it published a letter urging the government to go further and faster, in three pages of no punches pulled. “The government’s record on copyright is inadequate and deteriorating.” Oof.
What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.
Editor-in-Chief: Alex Goranin
Deputy Editors: Matt Mousley and Tyler Marandola
If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.