Google’s answer to the AI arms race: promote the guy behind its data center tech

Google has made a major move in the AI infrastructure arms race by elevating Amin Vahdat to the newly created position of chief technologist for AI infrastructure, reporting directly to CEO Sundar Pichai. The promotion signals how critical this work has become as Google pours as much as $93 billion into capital expenditures by the end of 2025, a figure parent company Alphabet expects to grow significantly next year.

Vahdat is not new to this field. The computer scientist, who earned his PhD at UC Berkeley and began his career as a research intern at Xerox PARC in the early 1990s, has been building Google’s AI backbone for the past 15 years. Before joining Google in 2010 as an engineering fellow and vice president, he was an associate professor at Duke University and later a professor at UC San Diego. His academic credentials are formidable, with approximately 395 published papers, and his research has long focused on making computers work more efficiently at massive scale.

Vahdat already maintains a high profile within Google. Just eight months ago, in his role as vice president and general manager of ML, Systems, and Cloud AI, he unveiled the company’s seventh-generation TPU, called Ironwood, at Google Cloud Next. The specifications were staggering: more than 9,000 chips per pod delivering 42.5 exaflops of compute, which Vahdat said was more than 24 times the power of the world’s top supercomputer at the time. He also told the audience that demand for AI compute had increased by a factor of 100 million in just eight years.
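That growth figure is easier to grasp as an annual rate: a factor of 100 million over eight years works out to roughly tenfold growth every year. A quick back-of-envelope check (the compounding framing here is an illustration, not something taken from the presentation):

```python
# Sanity check: what annual growth rate compounds to a 100,000,000x increase over 8 years?
growth_factor = 100_000_000
years = 8
annual_rate = growth_factor ** (1 / years)
print(f"~{annual_rate:.1f}x per year")  # ~10.0x per year, compounded
```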

Behind the scenes, Vahdat has been orchestrating the essential work that keeps Google competitive. This includes developing custom TPU chips for AI training and inference, which give the company an edge over rivals like OpenAI. He also oversees Jupiter, the high-speed internal network that lets Google’s servers communicate and move massive amounts of data. Vahdat has noted that Jupiter now scales to 13 petabits per second of bandwidth, theoretically enough to support a video call for all eight billion people on Earth simultaneously. This infrastructure is the invisible plumbing connecting everything from YouTube and Search to Google’s massive AI training operations across its data centers worldwide.
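The arithmetic behind that video-call claim is easy to check. A rough sketch, assuming a standard-definition call needs on the order of 1 to 1.5 Mb/s (that bitrate range is an assumption for illustration, not a figure from Google):

```python
# Rough sanity check on the Jupiter bandwidth claim.
total_bandwidth_bps = 13e15   # 13 petabits per second
people = 8e9                  # 8 billion simultaneous callers
per_person_bps = total_bandwidth_bps / people
print(f"~{per_person_bps / 1e6:.1f} Mb/s per person")  # ~1.6 Mb/s
# Roughly enough headroom for one standard-definition video call stream
# per person, which is what the claim implies.
```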

Vahdat has also been deeply involved in the ongoing development of Borg, the cluster management software that acts as the brain coordinating work across Google’s data centers. Furthermore, he oversaw the development of Axion, Google’s first custom Arm-based general-purpose CPU for data centers, which the company unveiled last year.

In short, Vahdat is central to Google’s AI story. In a market where top AI talent commands astronomical compensation and is a constant target for rival recruiters, Google’s decision to elevate Vahdat to the C-suite may also be a strategic retention move. When you have spent 15 years building someone into a linchpin of your AI strategy, you make sure they stay.