OpenAI Taps Broadcom to Build its Own AI Accelerators


OpenAI is partnering with Broadcom to develop custom AI accelerators and network systems. The rollout of ten gigawatts of capacity will commence in the second half of 2026 and continue until the end of 2029.

OpenAI and Broadcom will collaborate on the development of custom AI accelerators and network systems. The two companies aim to roll out ten gigawatts of accelerator capacity by 2029. OpenAI will be responsible for the design of the chips and systems, while Broadcom will handle manufacturing and the delivery of racks and network components.

In-house Chip Design

OpenAI will design the AI accelerators itself, allowing the company to embed insights from the development of its AI models directly into the hardware. This should deliver better performance and give OpenAI greater control over its own infrastructure. The custom systems will use Ethernet scale-out technology and other networking solutions from Broadcom.

This collaboration builds upon previous agreements between the two companies. Under a signed letter of intent, Broadcom will supply racks that combine OpenAI’s custom accelerators with its own networking solutions. Deliveries will begin in the second half of 2026 and are expected to be completed by the end of 2029. The systems will be deployed in OpenAI’s data centers and at colocation partners.

The choice of Ethernet as the network technology aligns with both companies’ broader ambition to build scalable, energy-efficient AI infrastructure. The racks will contain components from Broadcom’s portfolio, including Ethernet, PCIe, and optical connectivity solutions.

OpenAI is not putting all its eggs in one basket. The collaboration with Broadcom is just one pillar of the company’s future AI infrastructure. OpenAI recently invested in AMD with the aim of purchasing six gigawatts of accelerator capacity. Nvidia is also a key supplier to OpenAI.