California’s state senate recently gave final approval to a new AI safety bill, SB 53, sending it to Governor Gavin Newsom to either sign or veto. If this all sounds familiar, it is because Newsom vetoed another AI safety bill, also written by state senator Scott Wiener, last year. But SB 53 is narrower than Wiener’s previous SB 1047, with a focus on big AI companies making more than $500 million in annual revenue.
I discussed SB 53 with my colleagues Max Zeff and Kirsten Korosec on the latest episode of TechCrunch’s flagship podcast Equity. Max believes that Wiener’s new bill has a better shot of becoming law, partly because of that big company focus, and also because it has been endorsed by AI company Anthropic.
Read a preview of our conversation about AI safety and state-level legislation below. I have edited the transcript for length, clarity, and to make us sound slightly smarter.
Max explained why people should care about AI safety legislation passing in California. We are entering an era where AI companies are becoming the most powerful companies in the world, and this could be one of the few checks on their power. This bill is much narrower than SB 1047, which got a lot of pushback last year. But Max believes SB 53 still puts some meaningful regulations on the AI labs. It requires them to publish safety reports for their models. If they have a safety incident, it forces them to report it to the government. It also gives employees at these labs a channel to report concerns to the government without facing pushback from the companies, even if they have signed NDAs. To Max, this feels like a potentially meaningful check on tech companies' power, something we have not really had for the last couple of decades.
Kirsten added that it matters that this is happening at the state level, specifically in California. Pretty much every major AI company is based there or has a major footprint in the state. It is a hub of AI activity. Her question for Max was whether the new bill, with its exceptions and carve-outs, is more complicated than the previous one.
Max responded that in some ways, yes. The main carve-out in this bill is that it really tries not to apply to small startups. One of the main controversies around the last legislative effort was that people said it could harm the startup ecosystem, which is a booming part of California's economy. This bill specifically applies to AI developers generating more than $500 million from their AI models. It really tries to target big companies like OpenAI and Google DeepMind, not the run-of-the-mill startup.
As I understand it, if you are a smaller startup, you do have to share some safety information, but not nearly as much. It is also worth talking about the broader landscape around AI regulation. One of the big changes between last year and this year is that we have a new president. The federal administration has taken much more of a no-regulation stance, believing companies should be able to do what they want. To that end, it has included language in funding bills saying states cannot have their own AI regulation. None of that has passed so far, but the administration could try to get it through in the future. So this could be another front on which the Trump administration and blue states are fighting.