This is not California state Senator Scott Wiener’s first attempt at addressing the dangers of AI. In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns. A popular AI hacker house promptly threw a party to celebrate the veto, with one attendee remarking, “Thank god, AI is still legal.”
Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom's desk awaiting his signature or veto in the coming weeks. This time the bill appears to be more popular, or at least Silicon Valley does not seem to be at war with it. Anthropic endorsed SB 53 earlier this month. A Meta spokesperson said the company supports AI regulation that balances guardrails with innovation, calling SB 53 a step in that direction while noting areas for improvement. A former White House AI policy advisor called SB 53 a victory for reasonable voices and sees a strong chance the governor will sign it.
If signed, SB 53 would impose some of the nation's first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google. These companies currently face no obligation to disclose how they test their AI systems; many voluntarily publish safety reports explaining potential dangers, but they do so at their own discretion, and the reports are not always consistent. The bill would require leading AI labs, specifically those with more than $500 million in revenue, to publish safety reports for their most capable models. It focuses on the worst categories of AI risk: the potential to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is also weighing other bills that address different AI risks, such as engagement-optimization techniques in AI companions.
SB 53 would also create protected channels for employees at AI labs to report safety concerns to government officials, and it would establish a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond what the big tech companies offer. One reason SB 53 may be faring better than its predecessor is that it is less sweeping: SB 1047 would have made AI companies liable for any harms caused by their models, whereas SB 53 focuses on self-reporting and transparency. SB 53 also applies narrowly to the world's largest tech companies rather than to startups.
However, many in the tech industry still believe states should leave AI regulation to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards. The venture firm Andreessen Horowitz suggested that some California bills could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.
Wiener's answer to those concerns is that he has little faith in the federal government to pass meaningful AI safety regulation, so states need to step up. He believes the Trump administration has been captured by the tech industry and sees recent federal efforts to block all state AI laws as a way of rewarding political funders. The administration has moved away from the Biden administration's focus on AI safety and toward an emphasis on growth; Vice President J.D. Vance said at a conference that his focus was on AI opportunity, not safety. Silicon Valley has applauded the shift, and tech CEOs are now frequently seen at the White House.
Wiener believes it is critical for California to lead the nation on AI safety without choking off innovation. In a recent interview, he described his effort to regulate AI as a roller coaster and a rewarding learning experience, one that has helped elevate the issue globally. The challenge, he said, is to ensure that powerful new technology benefits humanity, promoting innovation while remaining mindful of public health and safety.
Asked about lessons from the last twenty years of technology, Wiener, who represents San Francisco, said he has watched large tech companies repeatedly stop federal regulation. He expressed concern about deals between tech leaders and the current administration but clarified that he is not anti-tech: he wants innovation to happen, yet does not believe the industry can be trusted to regulate itself. Within a capitalist system, he argued, sensible regulations are needed to protect the public interest.
SB 53 focuses on the worst potential harms of AI, such as death, massive cyberattacks, and bioweapons, because Wiener believes these catastrophic risks deserve specific attention. He does not think AI systems are inherently safe, though he acknowledges that many people inside AI labs work to mitigate risk. The goal, he said, is to make it harder for bad actors to cause severe harm.
On his conversations with industry, Wiener noted that Anthropic has been constructive. Other large AI labs may not support SB 53, but they are not fighting it with the intensity they brought against SB 1047; that bill imposed liability, whereas SB 53 requires transparency. Startups have been less engaged this year because the bill targets only the largest companies.
Wiener said he feels no pressure from large AI political action committees, noting that various groups have spent millions against him in the past without changing his policy approach. His goal is to do right by his constituents.
His message to Governor Newsom is that the legislature heard the concerns he raised in his SB 1047 veto, followed the path laid out by the governor's own working group, and hopes to have reached an agreement that merits his signature.