{"id":170,"date":"2023-11-08T11:55:06","date_gmt":"2023-11-08T15:55:06","guid":{"rendered":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/?p=170"},"modified":"2024-02-21T11:12:52","modified_gmt":"2024-02-21T15:12:52","slug":"the-ai-update-november-8-2023","status":"publish","type":"post","link":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/2023\/11\/08\/the-ai-update-november-8-2023\/","title":{"rendered":"The AI Update | November 8, 2023"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-96 size-full\" src=\"http:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-content\/uploads\/sites\/63\/2023\/04\/DM-AI-Update-e1681141844877.png\" alt=\"\" width=\"150\" height=\"60\" \/><\/p>\n<p><em>#HelloWorld. The days are now shorter, but this issue is longer. President Biden\u2019s October 30<sup>th<\/sup> Executive Order deserves no less. Plus, the UK AI Safety Summit warrants a drop-by, and three copyright and right-to-publicity theories come under a judicial microscope. Read on to catch up. Let\u2019s stay smart together. (<\/em><a href=\"mailto:AI-Update@duanemorris.com?subject=Subscribe%20to%20the%20mailing%20list%20&amp;body=Please%20add%20me%20to%20The%20AI%20Update%20list.\"><em>Subscribe to the mailing list<\/em><\/a><em> to receive future issues.)<\/em><\/p>\n<p><!--more--><\/p>\n<p><strong>Mission: Executive Order. 
<\/strong>The White House grabbed most of the AI headlines over the last two weeks after President Biden on October 30 signed his \u201c<a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">Executive Order<\/a> on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.\u201d (If staff reports are to be believed, the villainous \u201cEntity\u201d AI in <em>Mission: Impossible \u2013 Dead Reckoning, Part One<\/em> had <a href=\"https:\/\/variety.com\/2023\/film\/news\/joe-biden-worried-ai-mission-impossible-dead-reckoning-1235775115\/\" target=\"_blank\" rel=\"noopener\">something<\/a> to do with it).<\/p>\n<p>The order itself is a sprawling, dense document, with applicability across various economic sectors from energy to healthcare to finance. Here\u2019s the gist:<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>The order calls for a \u201ccoordinated, Federal Government-wide approach\u201d to studying and regulating AI, issuing instructions to at least 20 federal agencies and departments\u2014in approximate order of appearance: Commerce, NIST, Energy, Homeland Security, NSF, State, Defense, Treasury, OMB, HHS, Labor, the USPTO, the Copyright Office, the FTC, the DOJ, Agriculture, Veterans Affairs, Transportation, Education, and the FCC.<\/li>\n<li>Further guidance will be forthcoming from these agencies within the next year, at intervals ranging from five to 12 months, depending on the agency. NIST, for example, is tasked with a prominent role in section 4 of the order: Over the next nine months, it is to establish guidelines, best practices, and testing environments for auditing, evaluating, and \u201cred teaming\u201d AI models. 
(If you have questions about the specifics of how the order applies to your industry, please don\u2019t hesitate to <a href=\"mailto:Duane%20Morris%20AI%20Update%20%3cAI-Update@duanemorris.com%3e?subject=October%2030,%202023%20AI%20Executive%20Order%20Question\" target=\"_blank\" rel=\"noopener\">reach out<\/a>).<\/li>\n<li>The order is especially concerned in the near term with what it calls \u201cdual-use foundation models.\u201d These are defined, in section 3(k), as covering the largest models (\u201cat least tens of billions of parameters\u201d) that also \u201cpose a serious risk\u201d in three core areas: (1) \u201cchemical, biological, radiological, or nuclear\u201d weapons; (2) \u201cpowerful offensive cyber operations\u201d; and (3) \u201cevasion of human control or oversight through means of deception.\u201d<\/li>\n<li>Companies intending to develop \u201cpotential dual-use foundation models\u201d will have to report their plans to the federal government, including the results of any \u201cred-team testing\u201d (done to mitigate the risk of something going wrong). But these requirements cover only the largest of models\u2014those even larger than today\u2019s frontier models, like OpenAI\u2019s GPT-4 and Anthropic\u2019s Claude.<\/li>\n<li>Here at The AI Update, we\u2019ve been predicting since <a href=\"https:\/\/blogs.duanemorris.com\/artificialintelligence\/2023\/05\/16\/the-ai-update-may-16-2023\/\" target=\"_blank\" rel=\"noopener\">the spring<\/a> the eventual widespread adoption of requirements to identify and label AI-synthesized content. 
The order continues that trend: Section 4.5 calls for the Commerce Department to develop guidance for \u201cdigital content authentication and synthetic content detection measures,\u201d including digital \u201cwatermarking.\u201d<\/li>\n<li>Finally, for aficionados of contract definitions, section 3 of the order has some compact-yet-decently-accurate definitions of AI-specific terms like \u201cartificial intelligence,\u201d \u201cgenerative AI,\u201d \u201cAI model,\u201d \u201cAI red-teaming,\u201d and \u201cmodel weight.\u201d You may want to borrow them for your own tech contracts.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>The UK Safety Summit.<\/strong> President Biden\u2019s Executive Order scooped the recent <a href=\"https:\/\/www.aisafetysummit.gov.uk\/\" target=\"_blank\" rel=\"noopener\">UK AI Safety Summit<\/a>\u2014billed as \u201cthe first global AI Safety Summit\u201d\u2014by two days. The meeting, held November 1-2, produced <a href=\"https:\/\/www.gov.uk\/government\/publications\/ai-safety-summit-2023-the-bletchley-declaration\/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023\" target=\"_blank\" rel=\"noopener\">the Bletchley Declaration on AI Safety<\/a>, which includes the UK, US, China, Japan, India, and EU constituents among its 28 signatories. 
The declaration is similar in purpose and theme to the <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/statements-releases\/2023\/07\/21\/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai\/\" target=\"_blank\" rel=\"noopener\">voluntary commitment<\/a> 7 leading AI developers signed onto earlier this year in the US, covered in <a href=\"https:\/\/blogs.duanemorris.com\/artificialintelligence\/2023\/07\/27\/the-ai-update-july-27-2023\/\" target=\"_blank\" rel=\"noopener\">a prior issue<\/a> of The AI Update.\u00a0We remain hopeful that these voluntary initial undertakings are a road to somewhere more specific. Time will tell. South Korea and France are scheduled to host the next two summits, in <a href=\"https:\/\/www.reuters.com\/technology\/south-korea-france-host-next-two-ai-safety-summits-2023-11-01\/\" target=\"_blank\" rel=\"noopener\">2024<\/a>.<\/p>\n<p><strong>Some legal guidance in AI litigation.<\/strong> Back in the US, the federal <a href=\"https:\/\/www.courtlistener.com\/docket\/66732129\/andersen-v-stability-ai-ltd\/\" target=\"_blank\" rel=\"noopener\">case<\/a> brought by a group of artists against Stability AI, the maker of the popular text-to-image Stable Diffusion model, provided some interesting insights. The Northern District of California judge overseeing the lawsuit issued his <a href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.cand.407208\/gov.uscourts.cand.407208.117.0_2.pdf\" target=\"_blank\" rel=\"noopener\">opinion<\/a> dismissing most of the claims for now, but giving the artists an opportunity to try again. Here are three core takeaways:<\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>As expected, the court allowed the artists to go forward on the theory that the use of their registered copyrighted works to train the Stable Diffusion model violated the artists\u2019 copyright interests. 
These \u201ctraining data\u201d claims are the most popular theory of infringement in the current crop of generative AI cases.<\/li>\n<li>On the flip side, the court dealt a blow to one of the artists\u2019 most ambitious copyright theories: that every output Stable Diffusion synthesizes is, by definition, an infringing \u201cderivative work\u201d of copyrighted images in the training data. The court seemed persuaded that a generated output must still be substantially similar to a copyrighted work (the classic test for copyright infringement) before it is deemed a \u201cderivative work.\u201d<\/li>\n<li>Lastly, for the right-to-publicity claim, the court stressed that the named plaintiff artists had to show that Stability AI used <em>their <\/em>individual names\u2014not the names of other artists or artistic styles generally\u2014to advertise or promote the Stable Diffusion service. That kind of showing is a tall order in most cases.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>What we\u2019re reading: <\/strong>A start-up named Vectara recently published a \u201c<a href=\"https:\/\/github.com\/vectara\/hallucination-leaderboard\/tree\/main\" target=\"_blank\" rel=\"noopener\">Hallucination Leaderboard<\/a>,\u201d attempting to evaluate how often the leading large language models hallucinate under simulated conditions. Vectara prompted each LLM to summarize 1,000 short news documents and counted how many made-up facts were included. The winner? GPT-4, with a hallucination rate of 3%. The worst performer? Google\u2019s Palm-Chat, with a 27% rate. But that was an outlier: Most of the models tested had rates in the mid-to-high single digits. Still, in the knowledge business, even a single-digit percentage of factual inaccuracy may not be good enough.<\/p>\n<p><strong>What <em>should <\/em>we be following?<\/strong> Have suggestions for legal topics to cover in future editions? 
Please send them to <a href=\"mailto:AI-Update@duanemorris.com\">AI-Update@duanemorris.com<\/a>. We\u2019d love to hear from you and continue the conversation.<\/p>\n<p><strong><em>Editor-in-Chief<\/em><\/strong><strong>: <\/strong><a href=\"mailto:agoranin@duanemorris.com\">Alex Goranin<\/a><\/p>\n<p><strong><em>Deputy Editors<\/em><\/strong><strong>:<\/strong> <a href=\"mailto:mcmousley@duanemorris.com\">Matt Mousley<\/a> and <a href=\"mailto:tmarandola@duanemorris.com\">Tyler Marandola<\/a><\/p>\n<p><em>If you were forwarded this newsletter, <\/em><a href=\"mailto:AI-Update@duanemorris.com?subject=Subscribe%20to%20the%20mailing%20list%20&amp;body=Please%20add%20me%20to%20The%20AI%20Update%20list.\"><em>subscribe to the mailing list<\/em><\/a><em> to receive future issues.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>#HelloWorld. The days are now shorter, but this issue is longer. President Biden\u2019s October 30th Executive Order deserves no less. Plus, the UK AI Safety Summit warrants a drop-by, and three copyright and right-to-publicity theories come under a judicial microscope. Read on to catch up. Let\u2019s stay smart together. 
(Subscribe to the mailing list to &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/blogs.duanemorris.com\/artificialintelligence\/2023\/11\/08\/the-ai-update-november-8-2023\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The AI Update | November 8, 2023&#8221;<\/span><\/a><\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[117,15,16,13,17,118],"ppma_author":[5],"class_list":["post-170","post","type-post","status-publish","format-standard","hentry","category-general","tag-ai-litigation","tag-alex-goranin","tag-matt-mousley","tag-theaiupdate","tag-tylermarandola","tag-uk"],"authors":[{"term_id":5,"user_id":6,"is_guest":0,"slug":"duanemorris3","display_name":"Duane Morris","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/843ff6e7a8fe5fc92109b47a45f34b6cf0ea499e6e788db23456c838b0ae6747?s=96&d=blank&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/posts\/170","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/comments?post=170"}],"version-history":[{"count":0,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/posts\/170\/revisions"}],"wp:attachment":[{"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/media?parent=170"}],"wp:term":[{"taxonomy":"category","embeddable"
:true,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/categories?post=170"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/tags?post=170"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blogs.duanemorris.com\/artificialintelligence\/wp-json\/wp\/v2\/ppma_author?post=170"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}