AWS shows Project Rainier: HPC cluster tailored for Anthropic

AWS is showcasing Project Rainier at re:Invent: an HPC cluster for AI workloads built from its own proprietary chips. The system should help OpenAI competitor Anthropic develop its models.

At re:Invent in Las Vegas, AWS is showing Project Rainier to the general public. Project Rainier is an HPC supercluster built from hundreds of thousands of self-developed Trainium2 chips. The chips' name already gives away what they are intended for: AI training workloads.

Project Rainier is divided into Trn2 UltraServers: servers consisting of 64 Trainium2 chips. Each chip has 96 gigabytes of HBM memory and eight NeuronCores. Together, they give one UltraServer 332 petaflops of FP8 computing power.
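A quick back-of-the-envelope calculation from the figures above shows what one UltraServer adds up to (a sketch using only the article's numbers; the derived per-chip share is arithmetic, not an AWS specification):

```python
# Figures from the article: 64 Trainium2 chips per Trn2 UltraServer,
# 96 GB of HBM per chip, 332 petaflops of FP8 compute per server.
chips_per_server = 64
hbm_per_chip_gb = 96
server_fp8_pflops = 332

total_hbm_gb = chips_per_server * hbm_per_chip_gb    # aggregate HBM per server
fp8_per_chip = server_fp8_pflops / chips_per_server  # implied per-chip share

print(f"HBM per UltraServer: {total_hbm_gb} GB (~{total_hbm_gb / 1024:.1f} TB)")
print(f"Implied FP8 compute per chip: {fp8_per_chip:.2f} petaflops")
```

That works out to roughly 6 TB of HBM and just over 5 petaflops of FP8 per chip in each UltraServer.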

AWS links the UltraServers together virtually. The hardware for the Project Rainier supercluster is spread across data centers in different locations. In this way, AWS wants to guarantee that enough electrical power is available to run the whole thing.
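To get a feel for the scale being linked together, here is an illustrative count of UltraServers. The article only says "hundreds of thousands" of chips, so the 200,000 figure below is an assumed round number for illustration, not a published AWS specification:

```python
# Illustrative only: the exact chip count is not public; 200,000 is an
# assumed round figure consistent with "hundreds of thousands" of chips.
assumed_chip_count = 200_000
chips_per_server = 64  # Trainium2 chips per Trn2 UltraServer (from the article)

ultraservers = assumed_chip_count // chips_per_server
print(f"~{ultraservers} UltraServers for {assumed_chip_count} chips")
```

Even under this conservative assumption, that is thousands of UltraServers spread across multiple sites.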

Higher latency

On the other hand, Project Rainier does not have spectacularly low latency. The cloud provider developed proprietary networking technology called Elastic Fabric Adapter to offset that downside somewhat. Elastic Fabric Adapter lets data traffic bypass the operating system, which improves overall communication speed in the cluster.

Project Rainier is not yet finished. AWS expects to complete the cluster next year. When that happens, it will become the largest HPC cluster in the world for training AI models, and OpenAI competitor Anthropic will be able to use it. That company will then have five times more computing power at its disposal for developing its models than it does today.

AWS is investing heavily in Anthropic, seeking to counterbalance the Microsoft-OpenAI tandem with the partnership. Microsoft likewise backs OpenAI with both money and brute-force computing power in Azure.