Duane Morris Takeaway: This week’s episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and associate Tiffany Alberty as they discuss a significant development at the forefront of artificial intelligence legislation: a Colorado bill recently signed into law that aims to curb the risk of algorithmic bias across all sectors and uses of AI technology.
Check out today’s episode and subscribe to our show from your preferred podcast platform: Spotify, Amazon Music, Apple Podcasts, Samsung Podcasts, Podcast Index, Tune In, Listen Notes, iHeartRadio, Deezer, and YouTube.
Episode Transcript
Jerry Maatman: Thank you, loyal blog readers. Welcome to the next installment of our weekly podcast series, the Class Action Weekly Wire. I’m Jerry Maatman, a partner at Duane Morris, and joining me today is my colleague, Tiffany Alberty. Welcome.
Tiffany Alberty: Thanks, Jerry, excited to be here.
Jerry: Today we wanted to discuss what I believe to be a landmark development coming out of the state of Colorado regarding artificial intelligence legislation, specifically the new AI bill that was signed into law earlier this year. As a member of both the Illinois and Colorado bars, Tiffany, I know you’ve been advising employers on this, and I wondered what your takeaways were, at a 100,000-foot level, on this new law?
Tiffany: Sure. Thanks, Jerry, I appreciate the opportunity to speak today. So, as many of you know, on May 17 of this year, Colorado Governor Jared Polis signed into law SB-205, also known as the Consumer Protections for Interactions with Artificial Intelligence Systems. It takes effect in February 2026 and applies to Colorado residents. The bill was modeled after Connecticut’s ambitious legislation, which crumbled that same month amid Connecticut Governor Ned Lamont’s concerns that it would stifle innovation in the developing AI industry. Compared to narrower AI laws in states such as Florida or Utah, the Colorado statute is really the first legislation of its kind in the United States to focus on what are called “high-risk artificial intelligence systems.” Notably, it requires developers and companies that deploy this high-risk AI technology to use a standard of reasonable care to prevent algorithmic discrimination.
Jerry: Thanks for that overview, Tiffany, that’s very helpful. In terms of what corporate counsel need to understand about the concept of “high-risk AI systems,” how would you describe that in layman’s terms, with respect to the range of activities or software covered by the new Colorado law?
Tiffany: Sure. So, the Colorado law defines “high-risk AI systems” as those that make, or substantially contribute to making, “consequential decisions.” The contours are not entirely clear, but decisions that would be considered “consequential” under the law span a broad range of services, including education enrollment or education opportunities, employment or employment services and opportunities, financial or lending services, essential government services, healthcare services, housing, insurance, and, of course, legal services.
The law does carve out specific systems that are not covered: those that either (i) perform narrow procedural tasks, or (ii) detect decision-making patterns or deviations from prior decision-making patterns, and that are not intended to replace or influence the human component of assessment and review. Also excluded from the law are AI-enabled video games, cybersecurity software, anti-malware or anti-virus software, and spam or robocalling features and filters, in each case when they are not considered a “substantial factor” in making these consequential decisions.
As for what a “substantial factor” is, it’s defined as a factor that (i) assists in making a consequential decision, (ii) is capable of altering the outcome of that consequential decision, and (iii) is generated by an AI system.
Jerry: Well, on its face, that sounds quite broad, and I doubt that the exemptions are going to be used to swallow the rule. What do corporate counsel need to know about penalties and potential damages for violations of the statute?
Tiffany: Sure, so the penalties are hefty. The law gives the Colorado Attorney General exclusive authority to enforce violations, with penalties of up to $20,000 for each consumer or transaction involved in a violation. However, the law does not contain a private cause of action. Developers as well as deployers can assert an affirmative defense if they discover and cure the violation, or if they are in compliance with the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, otherwise known as NIST, or any other framework designated by the Colorado Attorney General, which should provide more specific and narrow guidance.
Jerry: The job of compliance counsel is certainly difficult given the patchwork quilt of privacy laws, but what would your advice be specifically for companies trying to engage in good-faith compliance with the Colorado law?
Tiffany: Sure, great question. There are key responsibilities at stake for both developers of AI technology and deployers, meaning the companies that utilize these systems, in terms of protecting consumers and employees from the risks of algorithmic discrimination. For AI developers, the duty to avoid algorithmic discrimination under the reasonable care standard includes several critical steps: providing deployers with detailed information about the AI systems and the documentation necessary for impact assessments; making a public statement about the types of AI systems they have developed or substantially modified; and disclosing any potential risks of algorithmic discrimination to known deployers and the Colorado Attorney General within 90 days of discovery.
That covers the AI developer side. Deployers of high-risk AI systems, too, have a duty under the law to avoid algorithmic discrimination, and they are required to implement comprehensive risk management policies, conduct impact assessments throughout the year, and review their AI systems annually to ensure that no algorithmic discrimination is occurring. They also need to inform consumers about the system’s decision-making processes, offer opportunities to correct any inaccurate information that is being collected, and, where feasible, allow appeals of adverse decisions with human review. Finally, similar to the developer side, deployers must disclose any algorithmic discrimination they discover to the Colorado Attorney General within 90 days of discovery.
So, taking more of a bird’s-eye view, the law covers AI technology when it is involved in consequential decisions, such as hiring and firing in the employment context, and it adds another layer of intervention to check the AI process and ensure that it does not produce any type of discriminatory or biased outcome. Companies have until February 2026 to come into compliance with this new Colorado AI law.
Jerry: Well, thanks, Tiffany. Those are great insights. I think the bottom line is that compliance just became a bit tougher in the wild west that is the legal frontier of artificial intelligence. If nothing else, what we’ve seen from the plaintiffs’ bar is that they have been very innovative in using statutes like this and cobbling together class actions involving employer use of artificial intelligence. Well, thank you, loyal blog readers, for tuning in to this week’s episode of our weekly podcast series. We will see you next week with another topic.
Tiffany: Thanks, everyone.