A new test for AI labs: Are you even trying to make money?

We are in a unique moment for AI companies building their own foundation models. First, a generation of industry veterans who made their names at major tech companies are now going solo. At the same time, legendary researchers with immense experience but ambiguous commercial aspirations are entering the field. There is a clear chance that some of these new labs will become behemoths on the scale of OpenAI, but there is also room for them to focus on interesting research without heavy pressure to commercialize.

The end result is that it is becoming difficult to tell who is actually trying to make money. To simplify this, I propose a sliding scale for any company developing a foundation model. This five-level scale does not measure whether a company is making money, but whether it is trying to. The idea is to gauge ambition, not success.

Think of it in these terms. Level 5 means a company is already making millions of dollars every day. Level 4 means it has a detailed, multi-stage plan to become one of the richest entities on Earth. Level 3 means it has many promising product ideas to be revealed in time. Level 2 means it has only the outlines of a concept of a plan. Level 1 represents a philosophical stance where true wealth is defined as loving yourself.

The big names, like OpenAI and Anthropic, are all at Level 5. The scale becomes more interesting with the new generation of labs, which have big dreams but harder-to-read ambitions. Crucially, the people involved in these labs can generally choose whatever level they want. There is so much money in AI right now that investors will not necessarily demand a business plan; even if a lab is just a research project, they may be happy to fund it. If you are not particularly motivated to become a billionaire, you might live a happier life at Level 2 than at Level 5.

Problems arise because it is not always clear where an AI lab lands on this scale. Much of the current drama in the AI industry stems from this confusion. Consider the anxiety over OpenAI’s conversion from a non-profit; the lab spent years at Level 1, then jumped to Level 5 almost overnight. Conversely, one could argue that Meta’s early AI research was firmly at Level 2, while the company itself aimed for Level 4.

With that in mind, here is a quick rundown of four contemporary AI labs and how they measure up.

Humans& was big AI news recently and part of the inspiration for this scale. The founders have a compelling pitch for the next generation of AI models, shifting the emphasis from scaling laws to communication and coordination tools. Despite glowing press, Humans& has been coy about how this translates into monetizable products: it seems to want to build them but will not commit to anything specific. The most the founders have said is that they will build an AI workplace tool to replace products like Slack and Google Docs, while fundamentally redefining how such tools work. This is just specific enough to place them at Level 3.

Thinking Machines Lab is very hard to rate. Generally, if a former ChatGPT project lead raises a $2 billion seed round, you assume a specific roadmap, and I would have felt good putting TML at Level 4. However, recent weeks saw the departure of its co-founder and CTO, along with several other employees, many citing concerns about the company’s direction. Nearly half the founding executives are now gone. One interpretation is that they thought they had a solid plan to become a world-class lab, only to find it was not as solid as they thought. They may have wanted Level 4 but realized they were at Level 2 or 3. There is not quite enough evidence for a full downgrade, but it is close.

World Labs, founded by the highly respected AI researcher Fei-Fei Li, initially seemed like a Level 2 or lower endeavor when it raised $230 million over a year ago. But a lot can change in the AI world. Since then, World Labs has shipped both a full world-generating model and a commercialized product built on top of it. We have also seen real demand for world-modeling from industries like video games and special effects, with no major labs offering direct competition. The result looks very much like a Level 4 company, perhaps soon to reach Level 5.

Safe Superintelligence, founded by former OpenAI chief scientist Ilya Sutskever, seems like a classic Level 1 startup. Sutskever has kept SSI insulated from commercial pressures, even turning down an acquisition attempt from Meta. There are no product cycles and, aside from the foundation model in development, no product at all. With this pitch, he raised $3 billion. Every indication is that this is a genuinely scientific project. However, the AI world moves fast, and it would be foolish to count SSI out commercially. In a recent interview, Sutskever suggested two scenarios that could prompt a pivot: research timelines turning out to be very long, or there being compelling value in having the most powerful AI impact the world. In other words, if the research goes very badly or very well, SSI could jump up several levels quickly.