
Google, Intel broaden AI chip partnership

April 10, 2026
Source: Mobile World Live | Category: AI | Region: Global
Google expanded its long‑running partnership with Intel, committing to use multiple generations of the chipmaker’s CPUs in its AI data centres. Under the agreement, Intel’s latest Xeon 6 processors will support AI training and inference workloads, bolstering the chipmaker’s position in an AI hardware market dominated by Nvidia.

Google’s Amin Vahdat, SVP and Chief Technologist, AI Infrastructure, stated Intel’s Xeon roadmap gives his company confidence it “can continue to meet the growing performance and efficiency demands of our workloads”. Intel CEO Lip‑Bu Tan noted scaling AI requires more balanced systems rather than accelerators alone.

The deal comes as CPUs regain strategic importance in AI systems, with processors becoming a bottleneck as agentic AI workloads extend beyond GPUs.

“The modern data centre needs to have CPUs to do much of the processing that goes on around the accelerators, and that’s increasingly important as we move to AI inference from training,” analyst Jack Gold told Mobile World Live. “While we often focus on GPUs and what Nvidia is doing in that space, it misses looking at the need for CPUs that manage all the workloads and do a significant portion of processing beyond just AI acceleration.”

He explained Google realises it needs powerful CPUs to run much of the cloud workloads it hosts, even as the tech giant looks to use some of its own custom CPUs for less intensive tasks.

Gold stated x86 architecture still rules data centres, which means Intel chips are in demand, as are AMD’s x86 CPUs. “That’s good news for Intel, who has already stated they are supply constrained in how many CPUs they can supply to the market because of high demand,” he said.

Intel is manufacturing its latest Xeon chips using its advanced 18A process at its fab in the US state of Arizona.

IPUs

The two companies will also deepen co‑development of custom ASIC‑based infrastructure processing units (IPUs).
The move comes despite Google’s growing use of in‑house chips, including its TPU AI accelerators. The IPUs offload networking, storage and security tasks from host CPUs, improving efficiency, utilisation and predictability at hyperscale.

“Cloud system efficiency also needs to manage things like interconnect, storage and power management,” Gold explained. “For best efficiency, you need a custom designed IPU that is tailored to the specific infrastructure designs of the hyperscaler data centre.”

Gold stated IPUs are important for Intel.