New US framework for AI diffusion, and pact to share info on threats to AI models

An interim final rule on AI diffusion is designed to streamline licensing for both large and small chip orders and to clarify for partner nations how they can benefit from AI.

A new rule from the US Commerce Department institutes a series of export controls designed to secure what the agency calls “the responsible use and diffusion of artificial intelligence,” the next iteration of the Biden administration’s ongoing effort to connect AI to national security and foreign policy initiatives.

Separately, the US government and major tech firms have committed to sharing information with one another under a new plan for reporting and exchanging details on security threats to AI models. The plan was issued by a division of the Cybersecurity and Infrastructure Security Agency (CISA).

Let’s examine the details.

Framework for AI diffusion

The regulatory framework the Bureau of Industry and Security (part of Commerce) announced this week places controls on specific closed AI model weights as well as on advanced computing chips. The rule also establishes new license exceptions and updates the Data Center Validated End User (VEU) authorization.

Commerce Secretary Gina Raimondo said in a statement that the policy would help build a globally trusted tech ecosystem and help the US protect itself against national security threats associated with AI, without compromising innovation.

The new controls are intended to safeguard rapidly developing AI models from malicious actors who seek to threaten US national security and foreign policy, according to the press release announcing the rule. It pointed specifically to the development of chemical or biological weapons, offensive cyber operations, mass surveillance, and human rights abuses.

“Managing these very real national security risks requires taking into account the evolution of AI technology, the capabilities of our adversaries, and the desire of our allies to share in the benefits of this technology,” Raimondo said. “We’ve done that with this rule, and it will help safeguard the most advanced AI technology and help ensure it stays out of the hands of our foreign adversaries, while we continue to broadly share the benefits with partner countries.”

The Biden administration said the rules were written to ensure that “humanity can reap” the technology’s “critical benefits” by opening up access to the most advanced US computing resources and AI technology. But that has not appeased some industry skeptics.

Jason Oxman, president and CEO of the trade group Information Technology Industry Council, said in a statement that the rule’s global export controls on AI-related advanced compute tools threaten “to fragment global supply chains and discourage the use of US technology.”

Protecting AI models

The new plan sets out a framework for reporting and exchanging details about security threats that have targeted or are currently targeting AI models. The playbook itself was issued by CISA through its Joint Cyber Defense Collaborative (JCDC).

The industry players contributing to the playbook’s content include Anthropic, Amazon Web Services, Google, Microsoft and OpenAI.

Besides serving as a channel for reporting ongoing attacks and newly discovered vulnerabilities, the playbook includes directions for various scenarios, such as reporting suspicious behavior or sharing information about newly identified threat actors.

The playbook is designed for security analysts, incident responders and other technical staff. The fate of both CISA and the JCDC, however, is uncertain as a new administration assumes office, particularly given that some GOP leaders have called for CISA’s elimination.

Either way, Scale AI plans to keep sharing intelligence with its JCDC partners, as the company’s head of security, Alex Levinson, told Axios this week.