Itdaily - OpenAI secures guarantees Anthropic sought in messy Pentagon contract takeover

After showing Anthropic the door and embracing OpenAI, the Pentagon is granting the latter AI specialist the very guarantees the former so desperately sought. Nevertheless, Anthropic officially remains a “supply chain risk.” The message is clear: those who do not listen to the government will face the consequences.

OpenAI has amended its brand-new agreement with the Pentagon. CEO Sam Altman himself admits that the deal struck late last week appeared “opportunistic and messy.” In the updated version, OpenAI is granted several guarantees.

For instance, the contract now includes terms intended to prevent OpenAI’s AI systems from being used for the deliberate mass surveillance of American citizens. The NSA, notorious for the mass surveillance of Americans, does not currently have unfettered access to OpenAI’s GPT models. Furthermore, Altman and his team have reportedly secured guarantees prohibiting the use of AI for autonomous weapons systems.

Anyone following this story might be left scratching their head. Let’s walk through it together:

The row begins

Two weeks ago, a row broke out between Anthropic and the U.S. Department of Defense. At the time, Anthropic’s Claude was the only AI model approved to run on the Pentagon’s classified systems. However, Anthropic had imposed conditions on its use: Claude was not to be deployed for the mass surveillance of American citizens or for autonomous weapons systems.

Defense Secretary Pete Hegseth could not stomach this. He argued that the Pentagon only needed to comply with the law. The fact that U.S. law currently says very little about the use of AI in such scenarios was a convenient detail. Anthropic was told to drop the contract conditions but refused.

The Pentagon then began threatening Anthropic: if the conditions remained, Anthropic would be labeled a “supply chain risk.” This designation is typically reserved for companies linked to hostile foreign powers. The label prohibits any company doing business with the Pentagon from partnering with Anthropic. Furthermore, Anthropic would lose its own contract with the Pentagon.

More threats

Last week, Hegseth added fuel to the fire. Anthropic was given a deadline to cave to the demands. CEO Dario Amodei reiterated his willingness to be flexible, but maintained two red lines: a ban on the mass surveillance of American citizens and on autonomous weapons systems.

Last Friday, the deadline passed. Hegseth was so intent on a contract without conditions regarding the mass surveillance of Americans and autonomous weapons systems that he stood his ground. Anthropic is now officially a risk company.

OpenAI’s opportunism

This was problematic for the Pentagon, as it had lost its only AI provider. Fortunately for Hegseth, Sam Altman stepped into the breach: a contract was quickly drawn up for ChatGPT to replace Claude.

The deal appeared opportunistic, to say the least. This view is shared not only by users switching to alternatives in droves but also by Altman himself. He acknowledged that negotiations might have moved a bit too quickly. “The issues are likely super complex,” he now reflects, referring to mass surveillance and the potential for creating AI killer robots. “Clear communication is necessary,” he remarked after a brief period of reflection.

The return of the conditions

Which brings us to today. Altman is perfectly willing to help the Pentagon, but he has two red lines: a ban on the mass surveillance of American citizens and on autonomous weapons systems.

Attentive readers might expect OpenAI to receive its own “supply chain risk” badge, but that is not the case. The existing contract—which Altman himself called messy—is being amended with new conditions. In this amendment, the Pentagon agrees not to use ChatGPT for the mass surveillance of American citizens or the development of autonomous weapons systems.

What happened?

After two weeks of threats, the Pentagon has effectively tarred and feathered its trusted AI provider and shown it the door. Government agencies are now required to distance themselves from the U.S. company en masse. Another major American AI firm has stepped in, only to ultimately secure the exact same conditions.

As far as the Pentagon’s use of AI is concerned, nothing has changed. Economically and ethically, however, there has been a seismic shift. The U.S. government has demonstrated that it will not hesitate to attack domestic companies with every tool at its disposal if it does not get its way.

Furthermore, by allowing OpenAI to secure these very conditions, Hegseth has shown that the government is willing to stoop to personal vendettas. Anyone attempting to uphold an independent ethical code against the government will face the consequences. In just two weeks, the environment surrounding the world’s most critical AI development has taken on an air of intimidation, fear, and opportunism.