At the center of every empire is an ideology, a belief system that propels it forward and justifies expansion, even when the costs of that expansion directly contradict the ideology’s stated mission. For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it is artificial general intelligence to benefit all humanity. OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.
Journalist Karen Hao, author of the book “Empire of AI,” described interviewing people whose voices shook with the fervor of their belief in AGI. In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire. The only way to understand the scope and scale of OpenAI’s behavior, she argues, is to recognize that the company has grown more powerful than almost any nation-state in the world, consolidating an extraordinary amount of not just economic power but also political power. OpenAI, she says, is terraforming the Earth and rewiring our geopolitics and all of our lives, an effort that can only be described as an empire.
OpenAI has described AGI as a highly autonomous system that outperforms humans at most economically valuable work, one that will elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge. These nebulous promises have fueled the industry’s exponential growth, with its massive resource demands: oceans of scraped data, strained energy grids, and a willingness to release untested systems into the world. All of this is in service of a future that many experts say may never arrive.
Hao says this path was not inevitable, and that scaling is not the only way to achieve advances in AI. Researchers can also develop new algorithmic techniques and improve existing algorithms to reduce the amount of data and compute they need. But that tactic would have meant sacrificing speed. When the quest to build beneficial AGI is defined as winner-take-all, which is how OpenAI defined it, speed matters more than anything else: speed over efficiency, speed over safety, and speed over exploratory research.
For OpenAI, the best way to guarantee speed was to do the intellectually cheap thing: take existing techniques and pump more data and more supercomputers into them. OpenAI set the stage, and rather than fall behind, other tech companies fell in line. And because the industry has captured most of the world’s top AI researchers, who no longer work in academia, an entire discipline is now being shaped by the agendas of these companies rather than by real scientific exploration.
The financial spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through 115 billion dollars in cash by 2029. Meta said in July that it would spend up to 72 billion dollars building AI infrastructure this year. Google expects capital expenditures of up to 85 billion dollars in 2025, most of it going to expand AI and cloud infrastructure.
Meanwhile, the goalposts keep moving, and the loftiest benefits to humanity have yet to materialize even as the harms mount: job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid as little as one to two dollars an hour for content moderation and data labeling work.
Hao said it is a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits. She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which was trained on amino acid sequences and the complex 3D structures into which proteins fold. It can accurately predict a protein’s 3D structure from its amino acid sequence, which is profoundly useful for drug discovery and understanding disease. Those, she says, are the types of AI systems we need. AlphaFold does not create mental health crises, colossal environmental harms, or content moderation harms, because its datasets contain none of the toxic material scraped from the internet.
Alongside the quasi-religious commitment to AGI has run a narrative about the importance of beating China in the AI race so that Silicon Valley can have a liberalizing effect on the world. Hao says literally the opposite has happened: the gap between the U.S. and China has continued to close, and Silicon Valley has had an illiberalizing effect on the world. The only actor to come out of it unscathed is Silicon Valley itself.
Many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge productivity gains by automating tasks like coding, writing, research, and customer support. But the way OpenAI is structured, part non-profit and part for-profit, complicates how it defines and measures its impact on humanity. That picture is further complicated by news that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.
Two former OpenAI safety researchers expressed fear that the AI lab has begun to confuse its for-profit and non-profit missions, believing that because people enjoy using ChatGPT, that alone ticks the box of benefiting humanity. Hao echoed these concerns, describing the danger of being so consumed by the mission that reality is ignored. Even as evidence accumulates that what these companies are building is harming significant numbers of people, the mission papers it all over. There is something really dangerous and dark, she said, about being so wrapped up in a belief system you constructed that you lose touch with reality.