New White House Executive Order Highlights Increased Complexity in AI Regulation – A Cross-Practice Overview

Section authors: Sandra A. Jeskie, Michelle Hon Donovan, Robert Carrillo, Ariel Seidner, Milagros Astesiano, Alex W. Karasik, Geoffrey M. Goodale, Neville M Bilimoria, Edward M. Cramp, Ted J. Chiappari, Kristopher Peters and M. Alejandra Vargas.

The White House’s October 30, 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“EO”) signals increased governmental regulation over the development and use of artificial intelligence (“AI”) models.  While the United States currently does not have a comprehensive AI regulation regime, many federal agencies already regulate the use and development of AI through a complex framework of rules and regulations.  President Biden’s EO promises to add a new layer of complexity by introducing sweeping changes affecting a wide variety of industries.  Duane Morris’ multi-disciplinary team of AI attorneys is ready to help clients working with AI tools stay abreast of new regulations in this rapidly evolving area of law.  Below, we summarize the most significant changes stemming from the White House’s most recent AI EO.

Responsible AI:  The EO gives the National Institute of Standards and Technology (“NIST”) 270 days to act in concert with other federal agencies and governmental stakeholders to create new guidelines, standards, and best practices for AI safety and security.  These will include, among other items, a companion resource to NIST’s AI Risk Management Framework for generative AI use cases, a secure software development framework for generative AI and “dual-use foundation models” (defined by the EO to include AI models that are trained on broad data, contain at least tens of billions of parameters, and are capable of posing a serious risk to national economic security and public safety), and standards for conducting AI “red-teaming” tests to evaluate model safety.

Leveraging the Defense Production Act, the EO also requires private companies working on dual-use foundation models to report to the government, on an ongoing basis, extensive development information, such as data on model weights and red-teaming test results, in order to ensure responsible AI development.  Companies that possess or develop “large-scale computing clusters,” as will be defined by the Secretary of Defense and other federal government stakeholders, and companies providing “Infrastructure as a Service” to the United States will also have new reporting responsibilities.  The Secretary of Commerce will have 90 days to promulgate the applicable reporting scheme.

Information Security:  Information security is a recurring theme of the EO, which seeks to balance the need to identify and mitigate the security risks associated with using AI against the benefits of leveraging AI. The EO emphasizes the importance of maintaining physical, administrative, and technical safeguards that help protect against potentially dangerous exploitation of AI systems. In addition to developing guidelines that complement the NIST AI Risk Management Framework, the EO calls for federal agencies to be diligent in their risk-mitigation practices, including proper staff training and the negotiation of appropriate terms of service with third-party providers, so that the protection of government information extends to agency vendors (noting confidentiality, compliance, privacy, and data protection requirements). The EO also cites the need for the development and maintenance of secure testing environments (“testbeds”) in which AI systems can be developed and evaluated before deployment.

The EO also addresses managing the use of AI to protect critical infrastructure.  It directs the Secretary of Homeland Security to establish and chair an AI Safety and Security Advisory Board (“AISSB”) made up of leading experts from industry, software companies, research labs, critical infrastructure entities, and the U.S. government to issue recommendations and best practices to ensure that AI use in this area is secure and resilient.  DHS is also instructed to leverage AI to improve U.S. cyber defense.

Intellectual Property: The EO emphasizes the need for the U.S. to promote responsible innovation, competition, and collaboration through investments in AI-related education, training, research, development, and capacity, while addressing intellectual property (“IP”) rights issues and challenges faced by inventors and creators. As part of the White House’s efforts to promote innovation, the EO includes several mandates aimed at clarifying issues around AI and its potential to generate IP assets. Specifically, the EO directs the Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office (“USPTO”) to issue guidance on the patent eligibility of inventions developed using AI, as well as other emerging issues at the intersection of AI and IP. The EO also requires the Director of the USPTO to consult with the Director of the U.S. Copyright Office and provide recommendations for further executive action, including on potential copyright protection for works created using AI and the treatment of copyrighted works used to train AI models. Along the same lines, the Secretary of Homeland Security is to develop a training, analysis, and evaluation program to mitigate AI-related IP risks.

Privacy: The EO reflects the U.S. government’s concern that AI may exacerbate risks to privacy, including by facilitating the collection or use of information about individuals and the drawing of inferences about them. To address these risks, the EO directs the federal government to ensure that the collection, use, and retention of personal data is lawful and secure. Specifically, the EO directs federal agencies to take the following principal actions: (1) evaluate how agencies collect and use commercially available information, particularly information containing personally identifiable information, and strengthen privacy guidance for federal agencies to mitigate the related privacy risks; (2) prioritize federal support for accelerating the development and use of privacy-enhancing technologies (“PETs”); (3) create a Research Coordination Network dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of PETs; and (4) develop guidelines for federal agencies to evaluate the effectiveness of PETs, including those used in AI systems. Importantly, the EO also calls on Congress to pass bipartisan data privacy legislation to protect all Americans from the data privacy risks posed by AI.

Healthcare:  The EO calls upon the U.S. Department of Health & Human Services (“HHS”) to develop, within 365 days and through an HHS AI Task Force, a strategic plan for the development of AI-enabled healthcare technology tools.  The AI Task Force is to consider possible regulation of, and provide input on, the responsible deployment of AI technologies in healthcare, including advances in the drug development process.  The AI Task Force is also to focus on addressing healthcare challenges for underserved communities, veterans, and small businesses.  As expected, HHS is also to provide guidance on the safety, privacy, and security of patient information in the development of AI software tools, especially to guard against cybersecurity threats.  Other AI healthcare initiatives may also be reviewed and addressed by HHS as they develop.

Education: The EO identifies education as a critical field in which the federal government will take advantage of advances in AI technologies while also protecting consumers and the public from adverse impacts.  Job training and education programs will give students access to opportunities to learn about AI, and resources will be made available to those who experience displacement in the workforce due to AI.  The EO makes clear that the federal government will continue to enforce existing consumer protections as AI evolves, including those safeguarding consumers from “fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.”

The EO also directs the Secretary of Education to develop policies concerning the use and impact of AI in education in consultation with stakeholders.  This will include the creation of an “AI toolkit” for institutions to use in implementing the Department’s recommendations concerning appropriate use of AI, including human review of AI decisions, the design of AI to enhance trust and safety, and alignment of AI systems with U.S. privacy laws and regulations, among other things.

Export Controls: While the EO does not explicitly address export controls, many aspects of AI (as defined in Section 3 of the EO) are embedded within both the Export Administration Regulations (EAR) administered by the U.S. Department of Commerce (Commerce) and the International Traffic in Arms Regulations (ITAR) enforced by the U.S. Department of State (State).  Moreover, the EO does authorize the Secretary of Commerce to take action pursuant to the International Emergency Economic Powers Act (IEEPA), as may be necessary.  Accordingly, we expect both Commerce and State to engage in reviews to update the control lists under both the EAR and ITAR and to issue regulations in this area that will result in additional export controls.

Employment:  In line with the White House’s commitment to advance equity and civil rights, the EO endeavors to ensure that AI is used responsibly to improve workers’ lives and to provide safeguards against harmful use. Within 180 days of issuing the EO, the Secretary of Labor is tasked with consulting with agencies and outside entities (including labor unions and workers) to develop and publish principles and best practices for employers to maximize AI’s potential benefits. These key principles and best practices will focus on job displacement, labor standards, job quality, employers’ AI-related collection and use of worker data, and the prevention of harm to employees’ well-being. With AI becoming increasingly used in processes for making employment decisions, the EO confirms that the federal government is fully tuned in to this emerging technology and is closely scrutinizing its potential impact.

Immigration: As part of a broad mandate to promote innovation and competition in AI and other critical and emerging technologies, the EO directs the Departments of State, Labor and Homeland Security to “expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.”  The key EO directives concerning immigration include: (i) Expanding visa availability and streamlining visa processing for AI work, study, or research; (ii) Implementing a domestic visa renewal program rather than requiring AI workers to leave the U.S. to renew their visas; (iii) Modernizing immigration pathways and streamlining the green card process for AI specialists with extraordinary ability or advanced degrees and for AI investors and entrepreneurs; and (iv) Expanding the list of “Schedule A” occupations that benefit from a streamlined permanent labor certification process.

These sweeping directives touch virtually every aspect of employment-based immigration in the United States and represent a creative and timely approach to address labor shortages and attract foreign talent in order to facilitate U.S. innovation and competition. If implemented effectively, this EO could go a long way to provide more streamlined, predictable paths to work visas and legal immigration.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.