The AI Update | July 13, 2023

#HelloWorld. Pushback and disruption are the themes of this edition as we look at objections to proposed regulation in Europe, an FTC investigation, the growing movement in support of uncensored chatbots, and how AI is disrupting online advertising. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

Pushback against AI regulation. The AI Update has closely followed the progress of the European Union’s proposed AI Act. Today we report on pushback in the form of an open letter from representatives of companies that operate in Europe expressing “serious concerns” that the AI Act would “jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” The letter takes particular aim at the proposed “high risk” treatment of generative AI models, worrying that “disproportionate compliance costs and disproportionate liability risks” will push companies out of Europe and harm the EU’s ability to be at the forefront of AI development. The ask from the signatories is that European legislation “confine itself to stating broad principles in a risk-based approach.” As we have explained, there is a long road and many negotiations ahead before any version of the AI Act becomes law in Europe, so it remains to be seen whether further revisions will reflect these concerns.

FTC flexes its muscle. Putting aside future AI-specific regulations, the Federal Trade Commission reminded us this week that existing regulatory rules apply to AI as well. The FTC sent a 20-page Civil Investigative Demand—in essence, the FTC’s version of a subpoena—to OpenAI. The stated purpose? To obtain information for an investigation into whether OpenAI’s use of LLMs has “engaged in unfair or deceptive privacy or data security practices” or has “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.” The latter justification seems to be an oblique reference to LLM “hallucinations,” in which a model generates text that sounds factual but is actually untrue. In the near term, the FTC’s action will require OpenAI to respond to the FTC’s request for significant amounts of information about OpenAI’s practices, data security and privacy policies, model development and training, its “fine-tuning” practices, and its efforts to mitigate the risks of its products, among other categories. In the long term, the FTC’s investigation may prove to be an early data point on how regulators will address AI while we await AI-specific legislation.

Off the rails. As we’ve covered in prior issues, policymakers and the biggest mainstream players debating best practices for responsible AI development largely agree that chatbots should be carefully designed not to output misleading, deceptive, offensive, or harmful information. But a growing faction of smaller developers now argues that chatbots should be uncensored and that responsibility for how their output is used should rest entirely with users. Users frustrated with mainstream offerings seem to agree. We don’t need an uncensored chatbot’s help to generate the parade of horribles for why this is a dangerous idea—we’ll just assume everyone has learned a lesson from how social media handles misinformation and hope for the best!

What we’re reading: With all the vague menace of AI eating the world lately, it was only a matter of time until it came for the Internet’s golden goose: advertising. According to a NewsGuard report, analysts found that at least 140 brands (including major banks, luxury department stores, sports apparel retailers, appliance manufacturers, consumer technology companies, and streaming services) were spending programmatic advertising budget on low-quality AI-generated news sites that operate with minimal (or zero) human supervision. Given the volume of content being generated for these widely proliferating sites (an average of more than 1,200 articles per day per site), placing programmatic advertising effectively is hard and only getting harder.

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola


© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
