Anthropic CEO Dario Amodei published a statement on Tuesday to set the record straight on the company’s alignment with Trump administration AI policy. He was responding to what he called a recent uptick in inaccurate claims about Anthropic’s policy stances. Amodei wrote that Anthropic is built on a simple principle: AI should be a force for human progress, not peril. He stated that this means making products that are genuinely useful, speaking honestly about risks and benefits, and working with anyone serious about getting this right.
Amodei’s response comes after criticism from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy advisor for AI Sriram Krishnan, who accused the company of stoking fears to damage the industry. The first criticism came from Sacks after Anthropic co-founder Jack Clark shared his hopes, and what he called appropriate fears, about AI. Clark described AI as a powerful, mysterious, and somewhat unpredictable creature, not a dependable machine that is easily mastered and put to work.
Sacks responded by claiming Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. He said the company is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem. California Senator Scott Wiener, author of AI safety bill SB 53, defended Anthropic, calling out President Trump’s effort to ban states from acting on AI while failing to advance federal protections. Sacks then doubled down, claiming Anthropic was working with Wiener to impose the Left’s vision of AI regulation.
Further commentary followed, with anti-regulation advocates such as Groq COO Sunny Madra accusing Anthropic of causing chaos for the entire industry by advocating modest AI safety measures rather than unfettered innovation.
In his statement, Amodei said managing the societal impacts of AI should be a matter of policy over politics. He stated his belief that everyone wants to ensure America secures its lead in AI development while also building tech that benefits the American people. He defended Anthropic’s alignment with the Trump administration in key areas of AI policy and provided examples of times he personally worked with the president.
For example, Amodei pointed to Anthropic’s work with the federal government, including its offering of Claude to federal agencies and its two hundred million dollar agreement with the Department of Defense, which Amodei called the Department of War, echoing Trump’s preferred terminology. He also noted that Anthropic publicly praised Trump’s AI Action Plan and has supported Trump’s efforts to expand energy provision to win the AI race.
Despite these shows of cooperation, Anthropic has faced criticism from industry peers for stepping outside the Silicon Valley consensus on certain policy issues. The company first drew criticism from Silicon Valley-linked officials when it opposed a proposed ten-year ban on state-level AI regulation, a provision that faced widespread bipartisan pushback. Many in Silicon Valley, including leaders at OpenAI, have claimed that state AI regulation would slow down the industry and hand China the lead.
Amodei countered that the real risk is that the U.S. continues to fill China’s data centers with powerful AI chips from Nvidia. He added that Anthropic restricts sales of its AI services to China-controlled companies despite the hit to its revenue. Amodei said there are products Anthropic will not build and risks it will not take, even if they would make money.
Anthropic also fell out of favor with certain power players when it supported California’s SB 53, a light-touch safety bill that requires the largest AI developers to make frontier model safety protocols public. Amodei noted that the bill has a carve-out for companies with annual gross revenue below five hundred million dollars, which would exempt most startups from any undue burdens.
Amodei addressed the suggestion that Anthropic is interested in harming the startup ecosystem. He wrote that startups are among Anthropic’s most important customers. The company works with tens of thousands of startups and partners with hundreds of accelerators and venture capital firms. He stated that Claude is powering an entirely new generation of AI-native companies, and damaging that ecosystem makes no sense for Anthropic.
In his statement, Amodei said Anthropic has grown from a one billion dollar to a seven billion dollar run-rate over the last nine months while deploying AI thoughtfully and responsibly. He wrote that Anthropic is committed to constructive engagement on matters of public policy: when the company agrees, it says so, and when it does not, it proposes an alternative for consideration. Amodei concluded that Anthropic will keep being honest and straightforward and will stand up for the policies it believes are right, stating that the stakes of this technology are too great to do otherwise.

