OpenAI adjusts its strategy and opts for AMD chips alongside Nvidia to distribute AI workload more efficiently
OpenAI may be working on its own AI hardware and has engaged Broadcom to develop custom chips designed to handle large AI workloads. The partnership gives OpenAI access to TSMC's secure and highly advanced fabs, Reuters reports. The chips are expected to enter production in 2026.
AMD strengthens OpenAI’s infrastructure
OpenAI remains one of Nvidia's largest customers, but due to shortages and rising costs, the company is also deploying AMD's MI300X chips. With this strategic shift, OpenAI joins other technology companies, such as Microsoft and Meta, that are trying to reduce their dependence on Nvidia. This diversification is much needed, as it helps OpenAI rein in its high compute costs.
Despite its partnerships with AMD and Broadcom, OpenAI continues to invest in its relationship with Nvidia, including early work with Nvidia's latest Blackwell chips. This will allow it to keep training AI models such as ChatGPT. However, competitors such as Google, Microsoft and Amazon are already several years ahead in chip development, so OpenAI may need more funding to become a full-fledged chip producer.
Good news for AMD
The interest in AMD hardware is good news for AMD itself. The company wants to compete with Nvidia but faces a chicken-and-egg problem: Nvidia is the biggest, so it has the most extensive ecosystem, so it is the most popular, so it remains the biggest. When parties like OpenAI embrace AMD's Instinct accelerators alongside Nvidia's chips, those accelerators become more popular and the ecosystem around them matures. That, in turn, could spur further adoption.