After a successful first attempt, AMD announces new Instinct accelerators. The MI325X is set to take on the Nvidia H200; heavier artillery will be needed for Blackwell.
AMD wants to assert itself as a data center specialist at its conference in San Francisco. We already wrote at length about the new Epyc Turin server processors, but AMD realizes that good CPUs alone are not enough to conquer the data center. In fact, there is a certain company that has taken the AI industry by storm with its GPUs.
Last year, to break Nvidia’s hegemony, AMD launched the Instinct MI300X, an “accelerator” for AI workloads in servers. The launch was a success: AMD immediately captured seven percent of the AI chip market, although the claims of “AI leadership” proclaimed by the company’s top executives may still be a bit overblown. AMD now wants to prove that the MI300X was no fluke and announced its successor, the Instinct MI325X, in San Francisco.
Duel with Nvidia H200
Whereas the MI300X was up against the Nvidia H100, the MI325X is pushed into a duel with the H200. The accelerator is built on the same CDNA 3 architecture as its predecessor. The MI325X offers 256 GB of HBM3e memory with 6 TB/s of bandwidth. According to AMD, that delivers up to 1.3 times higher inference performance than Nvidia’s direct counterpart. Intel’s Gaudi 3 is not even mentioned.
The MI325X will roll off the line starting in the fourth quarter, with wider availability through partners starting in the first quarter of next year. AMD is not communicating pricing at this time. A single Nvidia H200 easily sells for tens of thousands of dollars, so interested parties can start saving up ahead of time.

Standing up to Blackwell
AMD can’t sit still. Nvidia’s Blackwell is coming, despite production issues causing some delays. The MI325X will not hold its own in a direct duel with the Blackwell chips, so AMD is already hinting at how it plans to stay competitive. The MI accelerators will get a complete redesign in 2025.
AMD then plans to switch from CDNA 3 to CDNA 4. The MI350 series with the updated architecture is expected in the second half of 2025. AMD promises up to 288 GB of HBM3e per chip and up to 35 times better inference performance. AMD’s engineers are not venturing into a direct comparison with Blackwell yet, but they do promise that the accelerator will be “competitive.”
Starting next year, AMD also wants to move to an annual architecture cadence, similar to what Nvidia has proclaimed. That means the next CDNA generation, tentatively christened CDNA Next, should already be in the pipeline for 2026.