U.S. and U.K. Cybersecurity Agencies Announce International Agreement Addressing AI Safety
Duane Morris Takeaway: This week’s episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and special counsel Brandon Spurlock as they discuss the latest developments on the artificial intelligence regulatory front.
Check out today’s episode and subscribe to our show from your preferred podcast platform: Spotify, Amazon Music, Apple Podcasts, Google Podcasts, the Samsung Podcasts app, Podcast Index, Tune In, Listen Notes, iHeartRadio, Deezer, YouTube or our RSS feed.
Episode Transcript
Jerry Maatman: Hello, loyal blog readers! Welcome to the Class Action Weekly Wire. Today our guest is my colleague, Brandon Spurlock.
Brandon Spurlock: Hey Jerry, it’s great to be here. Thanks.
Jerry: Today, we’re talking about the most recent regulatory developments concerning artificial intelligence on a global basis. I know that, Brandon, you’re a thought leader in that space, so I wanted to get your feedback on what corporations should know about the global move toward regulation of artificial intelligence.
Brandon: Absolutely, Jerry. Well, this agreement was unveiled to the public just this past weekend – November 26, to be exact. It’s titled the “Guidelines for Secure AI System Development.” The initiative was led by the U.K.’s National Cyber Security Centre and developed in conjunction with the U.S. Cybersecurity and Infrastructure Security Agency. The guidelines focus on how to keep artificial intelligence systems safe from rogue actors. The U.S., Britain, and Germany are among the 18 countries that signed on to the new guidelines laid out in this 20-page document. Now, this is a non-binding agreement that lays out general recommendations, such as monitoring AI systems for abuse, elevating data protection, and vetting software suppliers. One thing to note is that the framework does not address the challenging questions around data sources for AI models or the appropriate use of AI tools.
Jerry: Well, it certainly seems to be a milestone on the road to regulation of AI from a comparative standpoint. Where does the United States stand when it comes to regulation of artificial intelligence, as compared to other countries or major jurisdictions?
Brandon: Really good question, Jerry. Many countries are pooling their resources, as well as independently positioning themselves to demonstrate leadership when it comes to embracing AI – while also cautioning against its security, privacy, and market risks. Countries like France, Germany, and Italy recently reached an agreement on how artificial intelligence regulations should be structured, built around “mandatory self-regulation through codes of conduct.” So what does this mean? The agreement is focused on foundation models – AI systems designed to produce a broad range of outputs. Meanwhile, the European Commission, the European Parliament, and the EU Council are negotiating how the bloc should position itself on this particular topic.
And just last month, we examined President Biden’s executive order on artificial intelligence – that publication from the White House provides businesses with an in-depth roadmap of how the U.S. federal government’s regulatory goals for AI are developing.
Jerry: The evolution of artificial intelligence is certainly uppermost in the minds of most corporate counsel, and its impact on litigation – and in particular, the class action world – is real, palpable, and already with us. So thank you for your thoughts and analysis, Brandon, and we’ll see you next week on the Class Action Weekly Wire.
Brandon: Thanks, Jerry.