The U.S. Department of Defense wants to brand AI specialist Anthropic a supply chain risk because it isn't getting its way. The move would hit Pentagon suppliers worldwide and, above all, sends a signal to other American AI companies.
The U.S. Department of Defense (or Department of War, according to the Trump administration) wants to label AI specialist Anthropic a 'supply chain risk,' Axios reports. Defense Secretary Pete Hegseth is said to be on the verge of making the final decision. If Anthropic receives the designation, the consequences will be far-reaching: not only the Pentagon itself, but all of its suppliers worldwide would have to remove Anthropic's AI from their systems.
Not really a risk
Hegseth's plans have nothing to do with any real risk posed by Anthropic or its AI model, Claude. On the contrary: Claude is currently the first and only AI model authorized for use on classified U.S. defense systems. The model reportedly even played a role in the capture of Venezuelan President Maduro on January 3 of last year.
The Pentagon, however, is bothered by the restrictions Anthropic wants to impose. While the AI company is willing to tailor its terms and conditions for defense use, it refuses to give carte blanche for Claude's deployment. Anthropic CEO Dario Amodei wants to prevent Claude from being used, for instance, for large-scale surveillance of American citizens or for developing weapons that can fire autonomously, without some form of human intervention.
Inadequate legislation
Hegseth calls the conditions too restrictive and wants the military to be able to use AI for all legally permitted purposes. In effect, he does not want Anthropic to attach its own conditions to the delivery of its AI services.
Anthropic points out that legislation is lagging behind AI capabilities. For example, the U.S. can already pull so-called open-source information from the internet, such as publicly accessible social media posts and photos. AI makes it possible to analyze all that information at scale and, for instance, quickly identify who is critical of the government and lives near a military base.
Intimidation
It appears the Pentagon wants to use this threat to intimidate Anthropic, as well as other AI players. Negotiations for new contracts with OpenAI, Google, and xAI are currently underway, and the threat immediately sets the tone for those talks.
At first glance, the impact of severing ties between Anthropic and the Pentagon would be limited: the current contract is worth $200 million, while Anthropic generates around $14 billion annually. The secondary consequences are more severe. The label Hegseth wants to apply is normally reserved for hostile foreign entities, and it bars suppliers worldwide from doing business with the sanctioned company.
The Pentagon closely monitors the digital behavior of its suppliers, including through the new CMMC (Cybersecurity Maturity Model Certification) requirements. European companies with direct or indirect ties to the U.S. Department of Defense could therefore also feel the disruption.
The end of ethical AI?
It is primarily the tertiary consequences that could prove most significant in the long run. The U.S. government is flexing its muscles, showing major AI companies what can happen if they try to apply their own ethical and moral standards when providing services. Not only does doing business with the Pentagon become impossible; the U.S. will also do everything in its power to inflict damage far beyond that.
Hegseth’s move demonstrates how difficult it is to develop AI according to ethical guidelines within the current climate in the U.S. This is bad news for ethical AI in general, as the center of gravity for AI development currently lies with large American companies vying for lucrative defense contracts.
Self-regulation at most companies is already mediocre. OpenAI, for example, has turned to advertising, which risks prioritizing ad revenue over quality. That Elon Musk's xAI plays fast and loose with ethics needs little explanation: its Grok model happily generated sexually explicit deepfakes and is under EU investigation for that reason, among others. As a major player, Anthropic still adheres to an ethical code for now, and for that reason refuses to attach advertisements to chats.
Low bar
The bar is low: all models are largely trained on data from people who never gave their consent. And in the agreement with the Pentagon, Amodei is already willing to make significant compromises. Yet even that is not enough.
As a result, the center of gravity of AI development today lies in a country with little regulation, where AI companies are under pressure to turn a quick profit after massive investments. It is no surprise that revenue there takes precedence over principles. On top of that, there is now a government that treats even a minimum of rules, such as 'no mass spying on its own citizens' and 'no AI killer robots,' as an insult.
