Newsom vetoes historic California AI safety bill

The bill would have codified a host of restrictive AI standards. But did it miss the point?

California’s landmark AI bill, SB 1047, has been short-circuited by a veto from Governor Gavin Newsom.

The bill would have required a “kill switch” for rogue AI programs and mandatory safety reporting to a new regulatory body, the Board of Frontier Models. It also would have created whistleblower protections for employees who call out dangers posed by their companies’ AI models.

The California State Assembly passed the bill 29-9 in August. Championed by state Senator Scott Wiener, it would have been one of the most restrictive and wide-ranging pieces of AI legislation ever enacted in the US.

Newsom had previously given little indication of his position on the bill. His veto will come as a welcome development to key industry players who contended that it would stifle innovation in the field and make it difficult for startups to enter the space.

Opponents of the bill included California-based AI developer OpenAI and venture capital firm Andreessen Horowitz.

Nine members of Congress who represent California districts, including former House Speaker Nancy Pelosi, also urged Newsom to veto the bill.

Others who have voiced their concern over accountability and risk in the ever-expanding AI industry lent their support to the bill.

The list included Elon Musk, who had previously called for a moratorium on AI development despite his involvement in xAI, maker of the large language model (LLM) Grok.

The prominent AI developer Anthropic said the bill’s benefits ultimately outweighed its costs, despite initially opposing it.

The risk of AI at scale

The bill ultimately targeted some of California’s biggest AI players instead of startups or smaller companies, pointing to the role that scale and consolidation of technology could play in creating catastrophic risk.

SB 1047 would have subjected two tiers of models to its regulatory requirements (a simple eligibility check is sketched after this list). These included models that:

  • Cost over $100m to develop and are trained using computing power “greater than 10^26 integer or floating-point operations” (FLOPs); or
  • Are based on covered models [costing $100m+] and fine-tuned at a cost of over $10m, using computing power of at least three times 10^25 integer or floating-point operations.
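To make those thresholds concrete, here is a minimal sketch in Python of how a developer might check whether a model would have fallen under the bill’s two tiers. The constant and function names are illustrative rather than drawn from the bill’s text, and the exact comparison operators are an assumption about the statutory wording.

    # Illustrative only: thresholds paraphrase SB 1047's two tiers; names are hypothetical.
    COVERED_TRAINING_COST_USD = 100_000_000   # over $100m to develop
    COVERED_TRAINING_FLOPS = 1e26             # more than 10^26 integer or floating-point ops
    DERIVATIVE_TUNING_COST_USD = 10_000_000   # over $10m to fine-tune
    DERIVATIVE_TUNING_FLOPS = 3e25            # three times 10^25 operations

    def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
        """Tier 1: a frontier model trained from scratch."""
        return (training_cost_usd > COVERED_TRAINING_COST_USD
                and training_flops > COVERED_TRAINING_FLOPS)

    def is_covered_derivative(base_is_covered: bool, tuning_cost_usd: float,
                              tuning_flops: float) -> bool:
        """Tier 2: a fine-tuned version of an already-covered model."""
        return (base_is_covered
                and tuning_cost_usd > DERIVATIVE_TUNING_COST_USD
                and tuning_flops >= DERIVATIVE_TUNING_FLOPS)

    # Example: a $150m model trained with 2 x 10^26 operations would have been covered.
    print(is_covered_model(150_000_000, 2e26))  # True

In practice, only a handful of frontier models would have cleared either bar, which is why the bill was widely read as aimed at the largest developers.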

However, Newsom pushed back on the premise that outsized risk comes only from massive large language models, arguing that smaller models could pose risks just as serious as the massively complex models produced by OpenAI and Anthropic.

“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” Newsom wrote in his veto statement.

Startup AI companies already face significant hurdles, such as the exorbitant prices of the chips that are required to train models on vast quantities of data.

Geoffrey Hinton and Yoshua Bengio, two Turing Award recipients who supported the bill, identified biological weapon deployment and cyberattacks on critical infrastructure as catastrophes that AI could inadvertently facilitate.

To combat these risks, the bill’s proponents underscored the need for a “kill switch” provision that could shut down an LLM as soon as it went awry, as well as frequent testing to identify systemic faults before disaster strikes.

However, others have described those doomsday scenarios as fanciful treks into the realm of science fiction that do not correspond to AI’s most common and tangible risks, such as deepfaking, job automation, and market manipulation.

Other state AI regulations have targeted these more mundane but significant harms.

For instance, Colorado passed AI legislation last May that deals more with consumer protection than with mass chemical attacks.

AI doesn’t harm people, people do

Some opponents of SB 1047 argued that it is more important to go after who is using a model, and for what purpose, than after the model itself.

Instead of focusing on the chance that an AI might become faulty or sentient, they argue, regulators would do better to take targeted action against companies and individuals who deploy AI for morally dubious, dangerous, or illegal purposes.

Andrew Ng, co-founder of Coursera and founder of Google Brain, addressed these issues in an interview with The Verge’s Kylie Robison.

“When someone trains a large language model…that’s a technology. When someone puts them into a medical device or into a social media feed or into a chatbot or uses that to generate political deepfakes or non-consensual deepfake porn, those are applications,” he said.

“And the risk of AI is not a function. It doesn’t depend on the technology – it depends on the application.”

Governor Newsom also addressed the specificity issue in his veto statement, criticizing SB 1047’s blanket approach.

“…SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” he said.