Anthropic will use Google Cloud TPUs to train and run Claude.
Google and Anthropic have signed a multi-year deal that experts estimate could be worth tens of billions of dollars. The deal gives Anthropic access to one million Google Cloud TPUs (tensor processing units), the specialized AI chips Anthropic will use to train and run its Claude models.
Expensive Computing Power
Anthropic says it chose Google’s TPUs for their price-performance ratio and relatively low power consumption. By 2026, the company expects to have access to more than one gigawatt of compute capacity, at an estimated cost of $50 billion per year.
AI companies such as OpenAI have signed multiple infrastructure deals this year, as they need access to data center capacity to train their large language models.
Competition with AWS
Until recently, Amazon was Anthropic’s main compute partner, but the company is now also deploying Google’s TPUs. Amazon has already invested $8 billion in Anthropic, compared to Google’s $3 billion, and supplies the Trainium2 chips that power Project Rainier.
Anthropic states that it “remains committed to working with Amazon,” SiliconANGLE writes. The company wants to combine AWS Trainium chips, Google TPUs, and Nvidia GPUs to spread risk and guarantee availability. That approach paid off this week, when Claude stayed online during a major AWS outage.
