Microsoft CEO Satya Nadella shared a video on Thursday showcasing the company’s first massive AI system to go into deployment. He called it an “AI factory,” a term favored by Nvidia, and promised it is the first of many such systems to be deployed across Microsoft Azure’s global data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers, which feature the highly sought-after Blackwell Ultra GPU and are linked by Nvidia’s high-speed InfiniBand networking technology. Beyond AI chips, Nvidia CEO Jensen Huang also secured a strong position in the InfiniBand market when his company acquired Mellanox for $6.9 billion in 2019.
Microsoft has committed to deploying hundreds of thousands of Blackwell Ultra GPUs as it rolls out these systems worldwide. While the scale of the systems is remarkable, the timing of the announcement is also significant: it follows shortly after OpenAI, Microsoft’s partner and well-documented frenemy, signed two high-profile data center deals with Nvidia and AMD.
In 2025, OpenAI has accumulated an estimated $1 trillion in commitments to build its own data centers, and CEO Sam Altman said this week that more deals are on the way. Microsoft clearly wants to emphasize that it already has the data centers, more than 300 of them across 34 countries, and the company says it is uniquely positioned to meet the demands of frontier AI today. Microsoft also describes these massive AI systems as capable of running the next generation of models with hundreds of trillions of parameters.
We expect to hear more about how Microsoft is preparing to serve AI workloads later this month. Microsoft CTO Kevin Scott will be speaking at TechCrunch Disrupt, which is scheduled for October 27 to October 29 in San Francisco.