Running an AI product demands immense computing power. As the tech industry races to harness the potential of AI models, a parallel competition is underway to construct the infrastructure needed to support them. On a recent earnings call, Nvidia CEO Jensen Huang estimated that between three and four trillion dollars will be spent on AI infrastructure by the end of the decade, with much of that investment coming from the AI companies themselves. This spending spree is straining power grids and pushing the industry’s construction capacity to its limits.
Microsoft’s one billion dollar investment in OpenAI in 2019 is widely considered the catalyst for the contemporary AI boom. Crucially, the agreement made Microsoft the exclusive cloud provider for OpenAI. As the demands of model training grew, more of Microsoft’s investment came in the form of Azure cloud credits rather than cash. The arrangement benefited both parties: Microsoft recorded more Azure sales, and OpenAI received funding for its biggest expense. Microsoft later increased its investment to nearly fourteen billion dollars.
The partnership between the two companies has since evolved. In January, OpenAI announced it would no longer use Microsoft’s cloud exclusively. The company now gives Microsoft a right of first refusal on future infrastructure demands but will pursue other providers if Azure cannot meet its needs. More recently, Microsoft began exploring other foundation models to power its AI products, establishing greater independence from OpenAI.
The success of the Microsoft and OpenAI deal made it common practice for AI companies to partner with specific cloud providers. Anthropic has received eight billion dollars in investment from Amazon and has made kernel-level modifications to better adapt Amazon’s hardware for AI training. Google Cloud has also signed on smaller AI companies like Lovable and Windsurf as primary computing partners, though these deals did not involve direct investment. Even OpenAI has secured new backing, receiving a commitment of up to one hundred billion dollars from Nvidia, much of which will go toward buying more of the chipmaker’s GPUs.
On June 30th, 2025, Oracle revealed in an SEC filing that it had signed a thirty billion dollar cloud services deal with an unnamed partner, a sum greater than the company’s cloud revenues for the entire previous fiscal year. OpenAI was later revealed as the partner, securing Oracle a spot alongside Google as one of OpenAI’s hosting providers. The announcement caused Oracle’s stock to rise significantly.
A few months later, on September 10th, Oracle announced a five-year, three hundred billion dollar deal for compute power with OpenAI, set to begin in 2027. Oracle’s stock climbed even higher following the news. The sheer scale of the deal is stunning, as OpenAI does not currently have three hundred billion dollars to spend. The figure presumes immense growth for both companies. Even before any money is spent, the deal has cemented Oracle as a leading AI infrastructure provider.
For companies like Meta that already have significant legacy infrastructure, the story is more complicated but equally expensive. Mark Zuckerberg has stated that Meta plans to spend six hundred billion dollars on US infrastructure through the end of 2028. In just the first half of 2025, the company spent roughly thirty billion dollars on infrastructure, about double its outlay over the same period a year earlier, driven largely by its growing AI ambitions. Some spending goes toward cloud contracts, like a recent ten billion dollar deal with Google Cloud, but even more resources are being poured into massive new data centers.
A new 2,250-acre site in Louisiana, named Hyperion, will cost an estimated ten billion dollars to build and provide an estimated five gigawatts of compute power. Notably, the site includes an arrangement with a local nuclear power plant to handle the energy load. A smaller site in Ohio, called Prometheus, is expected to come online in 2026 and will be powered by natural gas.
This kind of buildout comes with environmental costs. Elon Musk’s xAI built its own hybrid data center and power-generation plant in South Memphis, Tennessee. The facility has quickly become one of the largest emitters of smog-forming pollution in surrounding Shelby County, thanks to natural gas turbines that experts say violate the Clean Air Act.
The day after his second inauguration, President Trump announced a joint venture between SoftBank, OpenAI, and Oracle, meant to spend five hundred billion dollars building AI infrastructure in the United States. Named Stargate after the 1994 film, the project was announced with immense hype. Trump called it the largest AI infrastructure project in history, and Sam Altman agreed, calling it the most important project of the era.
The plan called for SoftBank to provide the funding while Oracle handled construction, with input from OpenAI. President Trump promised to clear away regulatory hurdles. However, there were doubts from the beginning, including from Elon Musk, who claimed the project did not have the necessary funds. As the initial hype faded, the project lost some momentum. In August, Bloomberg reported that the partners were failing to reach a consensus. Nonetheless, construction has moved forward on eight data centers in Abilene, Texas, with the final building set to be finished by the end of 2026.

