#HelloWorld. We originally thought this edition would focus on OpenAI’s attempts to self-regulate GPT usage, but the European Union had other plans for us. This past Thursday, news broke of an agreement to add generative AI tools to the AI Act, the EU’s centerpiece AI legislation. So today’s issue starts there, before discussing OpenAI’s and others’ recent announcements regarding training data access and usage. Let’s stay smart together. (Subscribe to the mailing list to receive future issues).
The EU’s Artificial Intelligence Act: The EU has been debating a proposed AI Act since 2018. In 2021, it published a legislative framework that would classify AI products into one of four risk categories: unacceptable risk (and therefore forbidden); high risk (and therefore subject to regular risk assessments, independent testing, transparency disclosures, and strict data governance requirements); limited risk; and minimal risk. But this approach was developed before so-called “foundation models”—LLMs like the one behind ChatGPT and image generators like DALL-E and Midjourney—exploded into the public consciousness. So questions remained about whether the AI Act would be adjusted to accommodate this new reality.