Intel and Google are increasing their focus on Infrastructure Processing Units (IPUs) with a new collaboration.
Intel and Google are expanding their partnership with a multi-year agreement centered on CPUs and infrastructure chips. The deal focuses on optimizing AI systems using not only GPUs but also IPUs. The collaboration is not a major shift: Intel has been supplying Xeon processors to Google Cloud for years, and infrastructure acceleration via IPUs and similar technologies has existed for some time.
AI doesn’t just run on GPUs
GPUs are widely used in AI training, but both companies stress that they are only one part of the picture. CPUs remain crucial for tasks such as data processing, orchestration, and coordinating complex AI workloads.
According to Intel CEO Lip-Bu Tan, AI does not run on individual chips but on complete systems. That is especially true for agentic AI, where workloads chain multiple steps and processes together, increasing the pressure on CPUs.
New role for IPUs
Alongside CPUs, Intel and Google are placing stronger emphasis on Infrastructure Processing Units (IPUs). These chips take over tasks from CPUs such as handling network traffic, storage management, and security. The goal is to free up CPUs and thereby increase the efficiency of large-scale AI infrastructure. This can make a significant difference, particularly in hyperscale environments such as cloud platforms.
However, the impact depends on where the bottlenecks are located. If limitations lie primarily with GPU memory or latency, IPUs offer less of an advantage.
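The bottleneck argument above can be made concrete with a back-of-the-envelope, Amdahl's-law-style estimate. This is a simplified illustrative model, not anything from the agreement itself: it assumes the IPU can absorb a given fraction of total runtime (the CPU's infrastructure work) while the GPU-bound remainder is unchanged.

```python
def ideal_speedup(offloadable_share: float) -> float:
    """Amdahl-style upper bound on speedup from IPU offload.

    offloadable_share: fraction of total runtime spent on CPU
    infrastructure tasks (networking, storage, security) that an IPU
    could absorb. Idealized: assumes that share drops to zero and
    everything else (e.g. GPU compute or memory time) is unchanged.
    """
    return 1.0 / (1.0 - offloadable_share)

# If 20% of runtime is offloadable CPU infrastructure work:
print(round(ideal_speedup(0.20), 2))  # 1.25

# If the GPU dominates and only 5% of runtime is offloadable,
# the best-case gain is small:
print(round(ideal_speedup(0.05), 2))  # 1.05
```

The second case illustrates the article's caveat: when the limits lie in GPU memory or latency, the offloadable share is small and the ceiling on IPU gains is correspondingly low.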
