In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through. The Trump administration then designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own. This prompted a backlash that saw users uninstalling ChatGPT and pushed Anthropic’s Claude to the top of the App Store charts. At least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon. Kirsten wondered whether we will see companies change their tune.
Sean pointed out that this is an unusual situation, partly because OpenAI and Anthropic make products that are widely used and discussed. Crucially, this is a dispute over whether and how their technologies are used in matters of life and death, so it naturally draws more scrutiny.
Still, Kirsten argued this is a situation that should give any startup pause. She questioned whether other startups are now looking at what happened between Anthropic and the Pentagon and reconsidering their pursuit of federal dollars.
Sean considered that question and thinks not, at least in the near term. Many companies, from startups to Fortune 500 firms, work with the Department of Defense on projects that fly under the radar. General Motors, for example, has long made defense vehicles for the Army, including electric and autonomous versions. That work rarely hits the zeitgeist.
The problem for OpenAI and Anthropic is the intense spotlight on them. Their products are ubiquitous and constantly talked about. This naturally highlights their government involvement to a level most other contractors do not face. The caveat, Sean added, is that the heat here is specifically about the use of AI in lethal missions. That adds an extra, more visceral element compared to a company like General Motors being a defense contractor. He does not expect other dual-use tech companies to back off, largely due to the lack of similar public spotlight and shared understanding of the impact.
This story is unique and specific to these companies and personalities in many ways. There are worthwhile questions about the role of AI in government. However, Anthropic and OpenAI are not that different in their stances. Both companies, at least publicly, say they want restrictions on how their AI is used. It seems Anthropic is simply digging in its heels harder on keeping the terms unchanged.
On top of that, there appears to be a personality layer: Anthropic CEO Dario Amodei and Emil Michael, now the chief technology officer for the Department of Defense, reportedly just do not like each other.
There is a personal conflict element, but the implications run deeper than that. The core issue is that the Pentagon and Anthropic entered a dispute that Anthropic appears to have lost, even though its technology is still considered crucial. OpenAI stepped in, and the situation remains fluid.
The blowback for OpenAI has been significant, with a surge in ChatGPT uninstalls after its Defense Department deal. Beyond that noise lies the critical issue: the Pentagon sought to change the terms of an existing contract. That is important and should give any startup pause. The political machine currently running the Pentagon appears to operate differently, and that is not normal. Government contracts take a long time to finalize, and attempting to alter their terms after the fact is a problem.