News

White House releases AI safety executive order

On Oct. 30, the White House released an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This 111-page document mainly concerns new reporting requirements for private companies developing large machine learning models, reporting requirements for government agencies, and an intention to eventually issue more substantial standards and regulations. It standardizes definitions in an attempt to make the development of future AI regulation more straightforward.

The Order describes its motivation as follows: “Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” The Order focuses mainly on what it calls “dual-use foundation models,” a term that “means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”

The National Institute of Standards and Technology (NIST), along with the Departments of Energy and Homeland Security, is tasked with developing best-practice guidelines for red-team (adversarial) AI safety testing. These agencies will also create publicly available testbeds.

The Order invokes the Defense Production Act to require companies developing dual-use foundation models to report on their activities; the ownership, possession, and security of their model weights; the results of any red-team testing; and their further safety plans. Companies, organizations, and individuals must also report to the government when they acquire a large-scale computing cluster. Until NIST develops updated standards, these reporting requirements apply to models trained with computing power greater than 10²⁶ integer or floating-point operations (or 10²³ if trained primarily on biological sequence data), and to computing clusters capable of 10²⁰ integer or floating-point operations per second. These are very high bars for computing power.
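For a sense of scale, here is a minimal sketch in Python of the threshold check these requirements imply. The function and the training-run figures below are hypothetical illustrations, not anything specified in the Order:

```python
# Reporting thresholds from the Executive Order (Sec. 4.2),
# in effect until NIST/Commerce issue updated standards.
GENERAL_MODEL_THRESHOLD = 1e26   # total integer or floating-point operations
BIO_MODEL_THRESHOLD = 1e23       # if trained primarily on biological sequence data
CLUSTER_THRESHOLD = 1e20         # theoretical max operations per second

def training_run_must_report(total_ops: float, uses_bio_data: bool) -> bool:
    """Return True if a training run exceeds the EO's reporting threshold."""
    threshold = BIO_MODEL_THRESHOLD if uses_bio_data else GENERAL_MODEL_THRESHOLD
    return total_ops > threshold

# Illustrative (hypothetical) estimate: 10,000 accelerators, each sustaining
# 1e14 ops/sec, running for 90 days.
ops = 10_000 * 1e14 * 90 * 24 * 3600   # ~7.8e24 total operations
print(training_run_must_report(ops, uses_bio_data=False))  # False: below 1e26
print(training_run_must_report(ops, uses_bio_data=True))   # True: above 1e23
```

Note that the biological-data bar is a thousand times lower than the general one, which appears to reflect the Order’s particular concern with biosecurity risks.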

Essentially, this EO will entail a great many meetings among bureaucrats, with various agencies collaborating on guidelines and frameworks.

The Order also seeks to “develop effective labeling and content provenance mechanisms.” This task falls to the Department of Commerce, which is to “authenticat[e] content and [track] its provenance; [label] synthetic content, such as using watermarking; [detect] synthetic content” and prevent AI from being used to generate illegal pornography. For now, watermarking will be legally required only of the federal government, not of private companies.
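The Order leaves the actual mechanisms for Commerce to work out. As one hedged illustration of the “authenticating content and tracking its provenance” half (not a watermark embedded in the media itself), a generator could attach a cryptographic tag to its output. Everything below, including the key and the function names, is a hypothetical sketch using Python’s standard library:

```python
import hashlib
import hmac

# Hypothetical provenance key held by the content generator.
PROVENANCE_KEY = b"example-secret-key"

def sign_content(content: bytes) -> str:
    """Attach a provenance tag: an HMAC of the content under the generator's key."""
    return hmac.new(PROVENANCE_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether content carries a valid provenance tag from this generator."""
    expected = hmac.new(PROVENANCE_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = sign_content(b"AI-generated article text")
print(verify_content(b"AI-generated article text", tag))  # True
print(verify_content(b"tampered text", tag))              # False
```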

There is also an intention to produce a report, with input from academia, AI companies, and others, on the risks and benefits of open-source AI models.

The government is also to consider the AI-related risks of releasing machine-readable federal data in accordance with the Open, Public, Electronic, and Necessary Government Data Act.

The Order also contains a section on streamlining visas for foreigners looking to study, work on, or research AI and other critical technologies in the United States. The State Department is to make these visas available, and must consider updating both the Exchange Visitor Skills List and the number of students and researchers who qualify for domestic visa renewals. The State Department and Homeland Security are also to look into improving immigration pathways for highly educated technology experts and startup founders, including making it easier for their families to secure permanent residency.

The NSF is directed to establish various AI research institutes and to train 500 new AI researchers by 2025. There will also be federal grants for AI research, some of them specifically for underprivileged groups.

Various federal law enforcement agencies are to come up with plans to deal with AI-related intellectual property theft.

The Department of Energy and various federal environmental regulators are to come up with ways to use AI to “improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans.” This includes streamlining regulation and permitting and improving grid resilience. They are also to use AI for basic science and climate science research. This covers only about half of the document, with more to come.