CEO Sam Altman acknowledged that OpenAI’s deal with the Department of Defense was “definitely rushed,” and that “the optics don’t look good.” The admission came swiftly after a series of events involving OpenAI’s competitor, Anthropic. After negotiations between Anthropic and the Pentagon collapsed, President Donald Trump directed federal agencies to stop using Anthropic’s technology following a six-month transition. Secretary of Defense Pete Hegseth also stated he was designating Anthropic as a supply-chain risk.
OpenAI then announced its own deal for models to be deployed in classified environments. With Anthropic stating it had drawn red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman asserting OpenAI had the same red lines, obvious questions emerged. Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not?
As OpenAI executives defended the agreement on social media, the company published a blog post outlining its approach. The post specified three areas where it said its models cannot be used: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions like “social credit” systems. The company stated that, unlike other AI companies, which rely primarily on usage policies, it protects its red lines through a more expansive, multi-layered approach. This includes retaining full discretion over its safety stack, deploying via cloud, keeping cleared OpenAI personnel in the loop, and maintaining strong contractual protections, all in addition to existing U.S. law.
The company added, “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”
After the post was published, commentator Mike Masnick claimed the deal “absolutely does allow for domestic surveillance,” because it states the collection of private data will comply with Executive Order 12333, among other laws. Masnick described that order as the legal basis that allows agencies like the NSA to conduct surveillance by tapping communications outside the U.S. even if they involve U.S. persons.
In response, OpenAI’s head of national security partnerships, Katrina Mulligan, argued that much of the discussion incorrectly assumes a single contract provision is the only safeguard. She stated that deployment architecture matters more than contract language, and by limiting deployment to cloud API, OpenAI can ensure its models are not integrated directly into weapons systems or surveillance hardware.
Altman also addressed questions about the deal, admitting it had been rushed and resulted in significant backlash, to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store. When asked why OpenAI proceeded, Altman said, “We really wanted to de-escalate things, and we thought the deal on offer was good.” He concluded, “If we are right and this does lead to a de-escalation between the DoD and the industry, we will look like geniuses, and a company that took on a lot of pain to help the industry. If not, we will continue to be characterized as rushed and uncareful.”