New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

Anthropic filed two sworn declarations in a California federal court late Friday afternoon, pushing back on the Pentagon’s assertion that the AI company poses an unacceptable risk to national security. Anthropic argues that the government’s case rests on technical misunderstandings and on claims that were never raised during the months of negotiations that preceded the dispute.

The declarations accompany Anthropic’s reply brief in its lawsuit against the Department of Defense and come ahead of a hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute traces back to late February, when the President and Defense Secretary publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The declarations come from Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector.

Heck is a former National Security Council official who served in the Obama White House. She later moved to Stripe and then to Anthropic, where she oversees the company’s government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei met with Defense Secretary Hegseth and the Pentagon’s Under Secretary Emil Michael.

In her declaration, Heck calls out what she describes as a central falsehood in the government’s filings: the claim that Anthropic demanded any kind of approval role over military operations. At no time during negotiations, she states, did any Anthropic employee say the company wanted such a role.

She also points out that the Pentagon’s concern that Anthropic could disable or alter its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government’s court filings, leaving Anthropic no opportunity to respond.

Another detail in Heck’s declaration concerns timing. On March 4, the day after the Pentagon finalized its supply-chain risk designation against Anthropic, Under Secretary Michael emailed Amodei stating that the two sides were very close on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and on mass surveillance of Americans.

This email is attached as an exhibit to her declaration. Heck contrasts this private communication with public statements made afterward. On March 5, Amodei published a statement saying the company had been having productive conversations with the Pentagon. The following day, Michael posted publicly that there was no active Department of War negotiation with Anthropic. A week later, he told a news outlet there was no chance of renewed talks.

The implication of Heck’s declaration is plain: if Anthropic’s stance on those two issues makes it a national security threat, why was the Pentagon’s own official saying the two sides were nearly aligned on those same issues right after the designation was finalized?

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic, he spent six years at Amazon Web Services managing AI deployments for government customers, including in classified environments. At Anthropic, he built the team that brought its Claude models into national security and defense settings, including a substantial contract with the Pentagon announced last summer.

His declaration addresses the government’s claim that Anthropic could theoretically interfere with military operations by disabling or altering its technology. Ramasamy states this is not technically possible: once Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it. He says there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any operational veto, he argues, is a fiction, because any change to the model would require the Pentagon’s explicit approval and action to install.

He adds that Anthropic cannot even see what government users are typing into the system, let alone extract that data.

He also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting, the same process required for access to classified information. He adds that to his knowledge, Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.

Anthropic’s lawsuit argues that the supply-chain risk designation, the first ever applied to an American company, amounts to government retaliation for the company’s publicly stated views on AI safety. It claims this violates the First Amendment.

The government, in a filing earlier this week, rejected that framing entirely. It stated that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech. The government maintains the designation was a straightforward national security call and not punishment for the company’s views.