AMD Unveils Its Dream Scenario: An Open and Scalable AI Ecosystem Built on Its Own GPUs, Software, and Infrastructure. Partners Such as Meta, OpenAI, and Oracle Are Already on Board.
During the Advancing AI 2025 event, AMD presented its vision for a fully integrated AI platform. Central to this is AMD's own technology: from the new Instinct MI350 GPUs and an open rack infrastructure to an update of the ROCm software stack. AMD is reaching out to major AI players to help realize this dream.
The chip manufacturer introduced the latest accelerators in the MI350 series, which are launching significantly earlier than planned. According to AMD, the MI350X and MI355X chips deliver a fourfold performance gain over the previous generation. The MI355X also offers up to 40 percent more tokens per dollar than competing solutions.
Chips, Racks, and Software
The Instinct MI350 is deployed in hyperscale AI infrastructure alongside fifth-generation EPYC processors and Pensando Pollara network cards. Oracle Cloud Infrastructure is one of the first to roll out this technology on a large scale.

AMD also provided a first look at the Helios rack, which will combine MI400 GPUs with new Zen 6-based EPYC processors. These are said to deliver up to ten times faster inference performance for Mixture of Experts models.
Finally, AMD opened its Developer Cloud to the broader developer and open-source ecosystem. This platform provides access to a managed environment for AI projects with support for ROCm 7. With this, AMD aims to lower the barrier for AI development and stimulate collaboration within an open ecosystem.
Extended Hand
AMD emphasized its open approach by partnering with various players in the AI landscape, and many industry giants are accepting this extended hand. Meta uses Instinct GPUs for inference of its Llama models. OpenAI runs GPT models on Azure with AMD hardware and collaborates on the development of MI400 platforms. Microsoft deploys the MI300X for its own models and open models on Azure.
Other partners include Cohere, which deploys its Command models on AMD hardware, and Red Hat, which uses MI GPUs in OpenShift AI. Additionally, Humain is committed to a broad collaboration with AMD for scalable AI infrastructure. Astera Labs and Marvell are contributing to UALink, an open standard for AI interconnectivity.
Ubiquitous Competitor
With an open ecosystem, AMD aims to further position itself as a challenger to Nvidia. AMD is virtually the only company in a position to mount that challenge, though Nvidia remains a formidable hurdle. The past few weeks have once again made clear how ubiquitous Nvidia is in AI data centers. During his European tour, Jensen Huang announced one partnership after another. Every party that praises AMD is just as eager to be seen alongside Huang.
In one domain, AMD remains unbeatable: supercomputers. In the newly published TOP500 list, AMD proudly tops the chart with both El Capitan and Frontier. However, Nvidia is never far behind and is quickly climbing toward the top with the European Jupiter computer.