Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a supply chain risk. The letter also calls on Congress to examine whether the use of these extraordinary authorities against an American technology company is appropriate.
The letter includes signatories from major technology and venture capital firms including OpenAI, Slack, IBM, Cursor, and Salesforce Ventures. It follows a dispute between the DOD and Anthropic after the AI lab refused to give the military unrestricted access to its AI systems.
Anthropic’s two red lines in its negotiations with the Pentagon were that it didn’t want its technology used for mass surveillance of Americans or to power autonomous weapons that make targeting and firing decisions without a human in the loop. The DOD said it had no plans to do either of those things, but that it shouldn’t be limited by a vendor’s rules.
After Anthropic CEO Dario Amodei declined to reach an agreement with Defense Secretary Pete Hegseth, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period. Hegseth then moved to designate Anthropic a supply chain risk, a designation normally reserved for foreign adversaries that would blacklist the AI firm from working with any agency or company that does business with the Pentagon.
In a social media post, Hegseth wrote that effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. A post alone, however, does not make Anthropic a supply chain risk: the government must complete a risk assessment and notify Congress before military partners are required to cut ties with Anthropic or its products.
Anthropic called the designation legally unsound and said it would challenge any supply chain risk designation in court. Many in the industry see the administration’s treatment of Anthropic as harsh and as clear retaliation.
The open letter argues that when two parties cannot agree on terms, the normal course is to part ways and work with a competitor. It states this situation sets a dangerous precedent, punishing an American company for declining to accept changes to a contract and sending a clear message to every technology company in America to accept whatever terms the government demands or face retaliation.
Beyond concern over the government’s harsh treatment of Anthropic, many in the industry remain worried about potential government overreach and the use of AI for nefarious purposes. An OpenAI researcher wrote in a social media post that blocking governments from using AI for mass surveillance is also his personal red line, and that it should be everyone’s.
Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD’s classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic.
The researcher added that if anything good can come out of the events of the last week, it would be the AI industry starting to treat the use of AI for government abuse and domestic surveillance as a catastrophic risk in its own right. He noted the industry has done a good job on risks such as bioweapons and cybersecurity and should apply similar processes here.