The US Department of Commerce’s Bureau of Industry and Security (BIS) just released a Notice of Proposed Rulemaking outlining a new mandatory reporting requirement for the world’s leading AI developers and cloud providers.
The proposed rule requires developers of advanced artificial intelligence models and operators of cloud computing networks to provide new and more detailed reporting to the federal government.
This disclosure includes reporting on developmental activities, cybersecurity measures, and the outcomes of red-teaming efforts (adversarial testing exercises), which involve probing for dangerous capabilities. Those tests range from assessing a model's ability to assist in cyberattacks to determining whether it could lower the barrier for non-experts to develop chemical, biological, radiological, or nuclear weapons.
The government is seeking to understand what the companies behind innovations such as generative AI are doing to prevent the technology from being misused by foreign adversaries and thereby posing a threat to US national security.
“This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems,” said Under Secretary of Commerce for Industry and Security Alan F. Estevez. “It would build on BIS’s long history of conducting defense industrial base surveys to inform the American government about emerging risks in the most important US industries.”
US government AI initiatives
This proposal follows an executive order issued by the White House last year requiring AI developers to notify the government of any technology that could pose a national security risk and to share the results of safety tests before public release.
Last year, the Biden-Harris administration also secured voluntary commitments from leading AI companies “to help move toward safe, secure, and transparent development of AI technology.” The voluntary commitments included ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.
For its part, Commerce issued guidance in July to help AI developers evaluate and mitigate the risks stemming from generative AI and dual-use foundation models. That guidance and accompanying software, which arise from the work of the US AI Safety Institute, aim to improve the safety, security, and trustworthiness of AI systems.
And last January, in collaboration with the private and public sectors, the National Institute of Standards and Technology developed a framework to better manage risks – to individuals, organizations, and society – that are uniquely associated with AI.