Takeaways From the Biden Administration’s Executive Order on AI

Client Alert

On October 30, 2023, President Biden signed the highly anticipated Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The 110-page order addresses a wide range of AI-related challenges and proposes a “coordinated, Federal Government-wide approach” to promote safety, privacy, equity, innovation and international cooperation while ensuring responsible government use of AI. The order builds on the administration’s previous AI pronouncements, including the Blueprint for an AI Bill of Rights and an executive order directing agencies to combat algorithmic discrimination, and it foreshadows future reports and guidance from more than a dozen federal agencies.

Here is a summary of the key provisions:

Ensuring safety and security of AI technology

The order emphasizes the need for new standards governing AI safety, especially related to national security and critical infrastructure. If a company develops a “dual-use foundation model” that poses a serious risk to national security, national economic security or national public health, the order directs the company to report periodically to the federal government and to share the results of its safety tests.

The order also calls on multiple federal agencies to develop AI safety guidelines. For example, the National Institute of Standards and Technology (NIST) must establish guidelines and best practices to ensure AI systems are safe, secure and trustworthy before public release. In this effort, NIST must (1) expand its AI Risk Management Framework to cover generative AI tools and (2) develop standards for stress-testing AI tools for potential vulnerabilities through an adversarial process known as “red-teaming.” To build trust in government data, the Department of Commerce must develop guidelines to “watermark” and authenticate AI content generated by the government. For its part, the Department of Energy must develop and implement a plan to identify where AI outputs may “represent nuclear, nonproliferation, biological, chemical, critical-infrastructure, and energy-security threats or hazards.”

Protecting privacy

The order urges Congress to pass data privacy legislation to protect Americans from the risks posed by AI and specifically references children’s privacy as a priority.

The order also directs the government to create a “Research Coordination Network” to promote agency use of “Privacy Enhancing Technologies” (PETs), which are hardware or software tools that allow the use of group or personal data while protecting the identity of the underlying individuals. Meanwhile, NIST is tasked with evaluating how PETs can be used to enhance privacy in AI tools.

The order also directs the Office of Management and Budget (OMB) to evaluate how federal agencies collect, use and share commercially available information that may contain personally identifiable data.

Advancing equity and civil rights

To prevent AI-driven discrimination and bias, the order emphasizes the administration’s commitment to addressing algorithmic discrimination in various sectors, including housing, federal benefits programs and federal contracting. The order also requires the Department of Justice and other agencies to report on the use of AI in the criminal justice system, including for surveillance, forensic analysis, sentencing and parole.

Protecting consumers and patients

The order calls for vigorous consumer protections in housing, health care and the economy in general. For example, the order requires housing agencies and the Consumer Financial Protection Bureau to address potential AI bias in loan underwriting, tenant screening and the sale of financial products. The order also requires the Department of Health and Human Services (HHS) to establish an AI task force to develop a strategic plan “on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health).”

Supporting workers

In light of potential job displacements caused by AI, the order stresses the importance of worker rights, safety and retraining. The order requires a report on AI’s potential labor-market impacts and directs the Department of Labor to publish AI best practices for employers related to hiring, worker evaluations, compensation, safety, monitoring, the right to organize and other workplace activities.

Promoting innovation and competition

The order seeks to position the United States as a leader in AI innovation and includes plans to accelerate AI research. The order pledges support for small AI developers and entrepreneurs to compete with larger companies. It also expands AI research grants in areas such as health care and climate change. To further boost AI innovation, the order also streamlines visa criteria for skilled immigrants with AI expertise.

To promote fair competition, the order encourages the Federal Trade Commission to apply its antitrust and consumer protection authority. The order also directs the U.S. Patent and Trademark Office and the U.S. Copyright Office to each issue guidance on the proper protection of inventions and creative works that incorporate AI.

Ensuring responsible and effective government use of AI

The order calls for expanded but responsible government use of AI. This includes a “government-wide AI talent surge” to recruit and hire technical experts across federal agencies.

Government agencies will need to follow new rules on the responsible deployment of AI products and services, including the appointment of chief AI officers, the establishment of AI governance boards, safeguards against discrimination, testing of vendor tools and training for staff.

Strengthening American leadership abroad

In an effort to “strengthen United States leadership of global efforts to unlock AI’s potential and meet its challenges,” the order encourages international collaboration on AI safety and requires the secretaries of Commerce and State to create a plan for working with key international partners on global technical standards. The order also emphasizes the important voluntary commitments and actions already made by U.S. technology companies.

Takeaways

By its nature, the executive order focuses heavily on national security and specific agency tasks that the administration oversees. For more comprehensive AI regulation, Congress would need to act. Still, there are few sectors of the economy left untouched by this order.

Here are some key takeaways:

Greater AI transparency

Most significantly, the order requires large AI platforms to provide the federal government with information about their training data, models and security. The administration points to national security and the Defense Production Act as the bases for this requirement, but the demand echoes calls for greater AI transparency by consumers, privacy advocates, business customers and regulators. This order may tip the market toward more disclosures in order to foster growth and greater trust in AI.

A road map for AI contracting and compliance

The order will require federal agencies to implement broad AI compliance programs: assigning leaders to AI governance, training staff on AI risks, incorporating safeguards into vendor contracts and applying “record-keeping, cybersecurity, confidentiality, privacy, and data protection requirements” to their AI programs. If successfully launched, these measures could serve as models for private-sector AI compliance programs.

Evolving data protection

The executive order gives a boost to three data protection ideas percolating in the private sector. First, it encourages federal agencies to “watermark” their AI content, similar to the labeling already advocated by groups like the Coalition for Content Provenance and Authenticity (C2PA). Second, the order promotes more research into “Privacy Enhancing Technologies” (PETs). Third, it advocates “red-teaming,” a practice well known to cybersecurity experts, who extensively use “white hat” hackers to stress-test their systems; the order requires NIST to establish not only AI red-teaming guidelines but also data test beds that could lead to wider adoption of the practice. In short, the order helps ensure that “watermarking,” “PETs,” and “red-teaming” will become regular terms in our AI lexicon.

Growing employment challenges

The order emphasizes long-standing employment concerns over bias in AI models and encourages federal agencies to vigorously enforce their antidiscrimination laws. At the same time, the order recognizes the need for more AI talent and therefore recommends modifying standards for government hires and expanding visa pathways for foreign AI experts. These measures will mean more work for human resources departments, employment lawyers and immigration attorneys.

Expanding health care opportunities

Besides national security, health care is a major focus of the executive order. The order hands a broad mandate to the HHS AI Task Force and requires studies related to patient safety, pre- and post-market technology assessments, health care financial assistance, and drug development. Therefore, the order will likely make AI a priority issue for providers, payers, device manufacturers, pharmaceutical companies, public health officials, regulators and patients.

