The U.S. Pentagon is banning Anthropic following a conflict over ethical usage limits for its AI models. OpenAI is walking away with the deal.
As of Friday evening, OpenAI can call itself the new exclusive AI partner of the Pentagon, the U.S. Department of Defense (recently rebranded the Department of War). CEO Sam Altman announced the deal himself on X: the Pentagon will deploy OpenAI’s AI models ‘within its classified network.’ The announcement came just hours after competitor Anthropic was banned.
Anthropic banned
Until recently, Anthropic was the Department’s preferred AI provider, but a disagreement over ethical guidelines proved an irreconcilable breaking point in the commercial relationship. Anthropic draws a hard line against using its AI models for mass surveillance of citizens or the development of autonomous weapons. The U.S. government, however, maintained that it is not up to a company to set those boundaries.
Anthropic was given until Friday to concede to the government’s demands. No compromise was reached, and the Pentagon is now ruthlessly cutting all ties. Pentagon chief and former Fox News anchor Pete Hegseth stated that Anthropic has also been labeled a ‘national security risk.’
As a result, Anthropic products and services are now prohibited across all government agencies; President Donald Trump is imposing an ‘immediate ban.’ Under the government’s strict supplier guidelines, companies providing services to the Pentagon are in principle also barred from using Anthropic. Trump stated on his platform Truth Social that he will use ‘all presidential power’ to enforce compliance.
OpenAI profits
‘One man’s loss is another man’s gain,’ they must have thought at OpenAI. Anthropic’s departure rolled out the red carpet for OpenAI to secure the Pentagon contract. Just hours after Anthropic’s deadline expired, Altman announced the new partnership.
The announcement is shrouded in the vagueness typical of military contracts. The Pentagon will deploy OpenAI’s models on its ‘classified’ network, and OpenAI, for its part, will take ‘technical measures’ to ensure its models do not go rogue. The company is also temporarily embedding engineers at the Department to ensure a smooth transition.
Altman assures that his models will not be used for surveillance or autonomous weapons, a promise reportedly included in the agreement. “The Department of Defense showed a deep respect for safety and reflects this in legislation and policy,” Altman writes. The fact that the dispute with Anthropic arose precisely because Anthropic refused to provide such capabilities raises more questions than it answers.
Dangerous precedent
Anthropic is not backing down without a fight. In a public statement, the company calls the Pentagon’s decision ‘legally incorrect’ and a ‘dangerous precedent’ for companies working with the government. It assures Claude users that they will not experience any impact.
The U.S. government is clearly setting the tone for American AI companies: those who follow a different moral compass are ruthlessly penalized. Since the center of gravity of the AI market lies in the United States and American AI tools are also used in Europe, this is more than a domestic matter.
