The race to regulate AI has sparked a federal-versus-state showdown

For the first time, Washington is closer than ever to deciding how to regulate artificial intelligence. The emerging debate is not about the technology itself but about who gets to write the rules. In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents from AI-related harms, including California’s AI safety bill and Texas’s Responsible AI Governance Act, which prohibits the intentional misuse of AI systems.

Silicon Valley technology companies argue that such state laws create an unworkable patchwork of regulations that threatens innovation and will slow the United States in its technological race against China. The industry, along with several of its allies in the White House, is pushing for a single national standard, or for no regulation at all. Within this all-or-nothing battle, new efforts have emerged to prohibit states from enacting their own AI legislation.

House lawmakers are reportedly trying to use the National Defense Authorization Act to block state AI laws, and a leaked draft of a White House executive order shows strong support for preempting state efforts to regulate AI. However, sweeping preemption that would strip states of the right to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar measure earlier this year. Lawmakers argue that without a federal standard in place, blocking states would leave consumers exposed to harm and let tech companies operate without oversight.

To create a national standard, Representative Ted Lieu and the bipartisan House AI Task Force are preparing a package of federal AI bills covering consumer protections in areas such as fraud, healthcare, transparency, child safety, and catastrophic risk. Such a comprehensive bill will likely take months, if not years, to become law, which underscores why the current rush to limit state authority has become one of the most contentious fights in AI policy.

Efforts to block states from regulating AI have intensified in recent weeks. The House has considered adding language to the National Defense Authorization Act that would prevent states from regulating AI, and Congress is reportedly working to finalize a deal on the defense bill. A source familiar with the matter said negotiations have focused on narrowing the provision’s scope, potentially preserving state authority over areas such as child safety and transparency.

Meanwhile, a leaked White House executive order draft reveals the administration’s own potential preemption strategy. The order, which has reportedly been put on hold, would create an AI Litigation Task Force to challenge state AI laws in court, direct agencies to evaluate state laws deemed onerous, and push federal commissions toward national standards that override state rules. Notably, the order would give David Sacks, a venture capitalist and the administration’s AI and Crypto Czar, co-lead authority over creating a uniform legal framework, handing him direct influence over AI policy. Sacks has publicly advocated for blocking state regulation and keeping federal oversight minimal, favoring industry self-regulation to maximize growth.

Sacks’s position mirrors that of much of the AI industry. Several pro-AI super PACs have emerged in recent months, spending hundreds of millions of dollars in local and state elections to oppose candidates who support AI regulation. One such PAC has raised more than one hundred million dollars and recently launched a campaign pushing Congress to craft a national AI policy that overrides state laws. A representative for the PAC argued that laws should not keep popping up from people who lack technical expertise, and that a patchwork of regulations would slow the race against China. The executive director of the PAC’s advocacy arm confirmed that the group supports preemption even without federal consumer protections in place, arguing that existing laws are sufficient to handle AI harms and favoring a reactive approach in which problems are addressed in court after they occur.

In contrast, a New York Assembly member running for Congress is one of the targets of these PACs. He sponsored a state act that requires large AI labs to maintain safety plans to prevent critical harms. He believes in the power of AI and argues that this is exactly why reasonable regulations are important. He supports a national AI policy but contends that states can move faster to address emerging risks. States do move faster: thirty-eight states have adopted more than one hundred AI-related laws this year, mainly targeting deepfakes, transparency, and government use of AI. A recent study found, however, that most of those laws impose no requirements on AI developers.

Activity in Congress provides more evidence of the slower pace at the federal level. Hundreds of AI bills have been introduced, but very few have passed. More than two hundred lawmakers signed an open letter opposing preemption in the defense bill, arguing that states serve as laboratories of democracy that must retain flexibility to confront new digital challenges. Nearly forty state attorneys general also sent an open letter opposing a ban on state AI regulation.

Cybersecurity experts argue the patchwork complaint is overblown. They note that AI companies already comply with tougher European Union regulations, and most industries find a way to operate under varying state laws. They suggest the real motive for opposing state laws is avoiding accountability.

Regarding a potential federal standard, Representative Lieu is drafting a megabill he hopes to introduce soon. It covers issues such as fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large AI companies. That last provision would require AI labs to test their models and publish the results, something most labs already do voluntarily. Lieu said his bill would not have federal agencies review AI models themselves, unlike a similar Senate bill that would require a government-run evaluation program for advanced AI systems before deployment. Lieu acknowledged his bill would be less strict, but said it stood a better chance of becoming law, noting the political challenge of passing any regulation through a Republican-controlled government.