The AI Update | April 4, 2024

#HelloWorld. Spring has sprung. While the EU AI Act receives wall-to-wall coverage in other outlets, this issue highlights recent rules, warnings, and legislative enactments here in the U.S. And it ends on a personal, meditative note from an AI user, worth a read. Let’s stay smart together. (Subscribe to the mailing list to receive future issues.)

New rules for federal agency use of AI. Following up on the President’s October 2023 Executive Order addressing AI, on March 28, the Office of Management and Budget released a memorandum governing many aspects of AI use by federal agencies.

Within 60 days, each agency is required to appoint a “Chief AI Officer” (or CAIO, for acronym lovers). The CAIO will coordinate the agency’s procurement, use, and risk management for AI systems. Among other obligations, agencies will now have to:

    • “proactively share their custom-developed code—including models and model weights—for AI applications in active use,” prioritizing sharing of code with the “greatest potential” for re-use by the public; and
    • implement certain “minimum practices” by year-end for any AI tools that are “safety-impacting” (because their outputs serve “as a principal basis” for decisions or actions affecting human life or critical resources) or “rights-impacting” (because their outputs serve as a principal basis for decisions affecting civil rights or access to government services).

These “minimum practices” include performing an AI impact assessment, testing and documenting results of AI use under real-world conditions, and maintaining adequate human oversight and opt-out processes for rights-impacting AI tools.

Utah enacts an AI statute. In mid-March, Utah’s governor signed into law a set of “Artificial Intelligence Amendments,” taking effect on May 1, 2024. One part of these amendments—the “Artificial Intelligence Policy Act”—does things like set up a state “Office of Artificial Intelligence Policy” and “Artificial Intelligence Learning Laboratory Program.”

The remaining part amends Utah’s consumer protection statutes to clarify that use of “generative artificial intelligence” is “not a defense” to the violation of those statutes. Private companies will not be able to escape liability for violations on the ground that a generative AI tool (rather than a human) made a statement or took an action that violates the state’s consumer protection statute. Two other interesting tidbits from this section:

    • To qualify as “generative artificial intelligence,” an AI system must have “limited or no human oversight.” So keep those humans in the loop (but make sure the humans themselves don’t violate consumer protections on their own);
    • Companies and professionals using AI to interact with their consumers and clients must “prominently disclose” their use of generative AI. Which brings us back to a broken-record theme of our newsletter: if you’re using a generative AI tool, you might as well start disclosing that use and labeling your outputs now, since eventually laws in many jurisdictions will likely require you to do so.

U.S. government spending on AI increases. The Brookings Institution released another study in its series examining U.S. government investment in AI, based on an analysis of federal contracts using the term “artificial intelligence” in the description. The takeaway? Washington, D.C., is putting its money where its mouth is: There’s been a staggering 1,200% increase in the potential value of AI contracts, amounting to more than $4.2 billion, in just one year. Not surprisingly, the spending is dominated by the Department of Defense.

A word of warning about unfounded AI marketing claims. In public remarks Reuters reported on March 19, the U.S. Attorney for the Northern District of California cautioned that “AI is fertile ground for fraudsters to make false and exaggerated claims”—not just about technological capabilities, but about standard business metrics like customer and subscriber numbers and revenues. Thus the double-edged sword of all the current attention on AI: Prospects and investors are closely monitoring the space, but so too are the law enforcers.

What we’re reading. To close out this issue, we’ll leave you with this link to a considered essay penned by Joe Dworetzky, Bay Area journalist, cartoonist, and sometime AI Update reader. In the piece, Joe reflects on his own experiments with art-generating AI tools and the thoughts and feelings evoked along the way. There’s a lesson here, about mindfulness and deliberateness in our approach toward the gear we use.

What should we be following? Have suggestions for legal topics to cover in future editions? Please send them to AI-Update@duanemorris.com. We’d love to hear from you and continue the conversation.

Editor-in-Chief: Alex Goranin

Deputy Editors: Matt Mousley and Tyler Marandola

If you were forwarded this newsletter, subscribe to the mailing list to receive future issues.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
