OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’

OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models within the department’s classified network. This development follows a high-profile standoff between the Pentagon and OpenAI’s rival, Anthropic. The Department of Defense, also referred to under the Trump administration as the Department of War, had pushed AI companies to allow their models to be used for “all lawful purposes.” Anthropic, however, sought to draw a red line around mass domestic surveillance and fully autonomous weapons.

In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company never raised objections to particular military operations, nor did it attempt to limit use of its technology in an ad hoc manner. He argued that in a narrow set of cases, AI can undermine, rather than defend, democratic values. More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.

After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Leftwing nut jobs at Anthropic” in a social media post. He directed federal agencies to stop using the company’s products after a six-month phase-out period. In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to seize veto power over the operational decisions of the United States military. Hegseth also said he is designating Anthropic as a supply-chain risk, meaning no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with the company.

On Friday, Anthropic said it had not yet received direct communication from the Department of War or the White House on the status of negotiations, but insisted it would challenge any supply-chain risk designation in court.

Surprisingly, Altman claimed in a post that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic. He stated that two of OpenAI’s most important safety principles are a prohibition on domestic mass surveillance and a requirement of human responsibility for the use of force, including for autonomous weapon systems. Altman said the Department of War agrees with these principles, reflects them in law and policy, and has included them in the agreement.

Altman said OpenAI will build technical safeguards to ensure its models behave as intended, a measure the Department also sought, and will deploy engineers to the Pentagon to help with the models and ensure their safety. He added that OpenAI is asking the Department to offer these same terms to all AI companies, expressing a strong desire to see things de-escalate away from legal actions and toward reasonable agreements.

Reports indicate that Altman told OpenAI employees at a meeting that the government would allow the company to build its own safety stack to prevent misuse, and that if a model refused a task, the government would not force OpenAI to make it comply. Altman’s post came shortly before news broke that the U.S. and Israeli governments had begun bombing Iran, with Trump calling for the overthrow of the Iranian government.