California’s new AI safety law shows regulation and innovation don’t have to clash

California Governor Gavin Newsom has signed SB 53, a landmark AI safety and transparency bill, into law. This legislation demonstrates that state regulation does not have to hinder the progress of artificial intelligence. According to Adam Billen, vice president of public policy at the advocacy group Encode AI, policymakers recognize the need for action and understand how to craft laws that protect innovation while ensuring product safety.

At its core, SB 53 is a first-in-the-nation bill that requires large AI labs to be transparent about their safety and security protocols. These protocols specifically address how companies prevent their models from being used for catastrophic harms, such as cyberattacks on critical infrastructure or the creation of bioweapons. The law also requires companies to adhere to these protocols, with enforcement handled by California’s Office of Emergency Services.

Billen states that the activities required by the bill are already common practice among companies, which typically perform safety testing and release model cards. However, he notes that some companies have begun to reduce their efforts in certain areas, underscoring the importance of such legislation. He also points out that some AI firms have policies allowing them to relax safety standards under competitive pressure. For example, OpenAI has publicly stated it may adjust its safety requirements if a rival releases a high-risk system without similar safeguards. Billen argues that policy can help enforce companies’ existing safety promises and prevent them from cutting corners.

Public opposition to SB 53 was quieter compared to its predecessor, SB 1047, which Governor Newsom vetoed last year. Despite this, a common sentiment in Silicon Valley and among AI labs is that almost any AI regulation is harmful to progress and could hinder the United States in its race against China. This belief has led companies like Meta, venture capital firms like Andreessen Horowitz, and individuals like OpenAI president Greg Brockman to invest heavily in super PACs supporting pro-AI politicians. These same forces previously pushed for a moratorium that would have banned states from regulating AI for ten years.

Encode AI led a coalition of more than 200 organizations that successfully defeated that proposal. Billen notes the fight is not over, as Senator Ted Cruz, who championed the moratorium, is now pursuing a new strategy to achieve federal preemption of state laws. Cruz introduced the SANDBOX Act, which would allow AI companies to apply for waivers to bypass certain federal regulations for up to ten years. Billen also anticipates a forthcoming federal AI standard bill that would be presented as a compromise but would effectively override state laws. He warns that such narrowly scoped federal legislation could eliminate federalism for this critical technology.

While Billen agrees that the AI race with China is important and that regulation should support American progress, he believes eliminating state bills is not the solution. State legislation typically focuses on issues like deepfakes, transparency, algorithmic discrimination, children’s safety, and governmental use of AI. He asserts that bills like SB 53 are not what will stop the U.S. from beating China, calling such a claim intellectually dishonest. He suggests that those focused on winning the AI race should instead advocate for measures like export controls on advanced chips and ensuring American companies have the components they need, which is not where the industry’s energy is currently directed.

Legislative efforts like the Chip Security Act aim to prevent the diversion of advanced AI chips to China through export controls and tracking. The existing CHIPS and Science Act also seeks to boost domestic chip production. However, major tech companies, including OpenAI and Nvidia, have shown reluctance or opposition to certain aspects of these efforts, citing concerns about effectiveness, competitiveness, and security. Nvidia, in particular, has a financial incentive to continue selling chips to China, which has historically represented a significant portion of its revenue. Billen speculates that OpenAI might avoid advocating for chip exports to maintain a good relationship with key suppliers like Nvidia.

The Trump administration has sent mixed signals on this issue. After expanding an export ban on advanced AI chips to China in April 2025, the administration reversed course months later, allowing Nvidia and AMD to sell some chips to China in exchange for a percentage of the revenue. Billen observes that while some in Congress are moving towards bills with export controls, there is a simultaneous narrative to kill state bills that are relatively light-touch.

Billen describes SB 53 as an example of democracy in action, where industry and policymakers collaborate to create a bill that everyone can support. He acknowledges the process is messy but believes this foundation of democracy and federalism is crucial for the country and its economic system. He sees SB 53 as one of the best examples that this process can still work effectively.