Why Cohere’s ex-AI research lead is betting against the scaling race

AI labs are racing to build data centers as large as Manhattan, with each one costing billions of dollars and consuming as much energy as a small city. This massive effort is driven by a deep belief in scaling, the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.

However, a growing chorus of AI researchers now says the scaling of large language models may be reaching its limits. They argue that other breakthroughs may be needed to significantly improve AI performance. This is the bet that Sara Hooker, Cohere’s former VP of AI Research and a Google Brain alumna, is taking with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy.

The startup is built on the idea that scaling large language models has become an inefficient way to squeeze out more performance. Hooker, who left Cohere in August, quietly announced the startup this month to begin broader recruiting. In an interview, Hooker said Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the specific methods behind this approach.

Hooker argues the field has reached a turning point: it is now clear that simply scaling up models, an approach she calls "scaling-pilled," has not produced intelligence that can navigate or interact with the world. She believes adapting is the heart of learning. If you stub your toe on a table, for example, you learn to step more carefully next time. AI labs have tried to capture this idea through reinforcement learning, which lets models learn from their mistakes in controlled settings. But today's reinforcement learning methods do not help deployed models learn from mistakes in real time; a production model that errs will simply repeat the error.
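The gap Hooker is pointing at can be seen in a toy example. The sketch below is purely illustrative (it is not Adaption Labs' method, and the names and numbers are made up for the example): a simple two-armed bandit agent whose world changes mid-deployment. An agent that keeps updating from feedback recovers; one whose learning is frozen after a short "training" phase keeps repeating the now-wrong choice.

```python
import random

def run(adaptive, steps=4000, seed=1):
    """Two-armed bandit where the payoffs flip halfway through.

    adaptive=True  -> value estimates keep updating from every outcome.
    adaptive=False -> learning stops after step 200, mimicking a model
                      that is trained once and then deployed frozen.
    Returns the average reward per step.
    """
    rng = random.Random(seed)
    payoff = [0.8, 0.2]            # arm 0 starts as the better choice
    q = [0.0, 0.0]                 # estimated value of each arm
    n = [0, 0]                     # pull counts per arm
    total = 0
    for t in range(steps):
        if t == steps // 2:        # the world changes mid-deployment
            payoff = [0.2, 0.8]
        if adaptive and rng.random() < 0.1:
            arm = rng.randrange(2)             # keep exploring
        else:
            arm = 0 if q[0] >= q[1] else 1     # exploit current estimates
        reward = 1 if rng.random() < payoff[arm] else 0
        total += reward
        if adaptive or t < 200:    # the frozen agent stops updating here
            n[arm] += 1
            q[arm] += (reward - q[arm]) / n[arm]   # incremental mean update
    return total / steps

# The adaptive agent tracks the change; the frozen one repeats its mistake.
print(f"adaptive agent: {run(True):.2f} avg reward")
print(f"frozen agent:   {run(False):.2f} avg reward")
```

The frozen agent earns roughly the average of the two regimes because it never revises its estimates after the flip, while the adaptive agent relearns which arm pays off. Whether that kind of continual updating can be done efficiently at the scale of deployed AI systems is exactly the open question Adaption Labs is betting on.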

Some AI labs offer expensive consulting services to help enterprises fine-tune their AI models for custom needs. OpenAI reportedly requires customers to spend upwards of ten million dollars to access its consulting services for fine-tuning. Hooker observes that a handful of frontier labs decide the set of AI models everyone is served, and that those models are very expensive to adapt. She believes this no longer needs to be true, and that AI systems can learn very efficiently from an environment. Proving this would change the dynamics of who gets to control and shape AI, and whom these models ultimately serve.

Adaption Labs is the latest sign that the industry's faith in scaling large language models is wavering. A recent paper from MIT researchers found that the world's largest AI models may soon show diminishing returns. The mood in San Francisco appears to be shifting as well. Dwarkesh Patel, the AI world's favorite podcaster, has recently hosted notably skeptical conversations with prominent AI researchers. Richard Sutton, a Turing Award winner regarded as the father of reinforcement learning, told Patel that large language models cannot truly scale because they do not learn from real-world experience. Early OpenAI employee Andrej Karpathy likewise expressed reservations about the long-term potential of reinforcement learning to improve AI models.

Such concerns are not new. In late 2024, some AI researchers argued that scaling AI models through pretraining, in which models learn patterns from massive datasets, was hitting diminishing returns. Pretraining had been the secret sauce behind OpenAI's and Google's model improvements. Those scaling concerns are now supported by data, but the industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take extra time and computational resources to work through problems, have pushed AI capabilities even further.

AI labs now seem convinced that scaling up reinforcement learning and AI reasoning models is the new frontier. OpenAI researchers have said they developed their first AI reasoning model because they thought it would scale up well. Meta and Periodic Labs researchers recently released a paper exploring how reinforcement learning could scale performance further, a study that reportedly cost more than four million dollars, underscoring how expensive current approaches remain.

In contrast, Adaption Labs aims to find the next breakthrough and prove that learning from experience can be far cheaper. The startup was in talks to raise a twenty to forty million dollar seed round earlier this fall, according to investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment on the funding but stated the company is set up to be very ambitious.

Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks, a trend Hooker wants to continue. She also built a reputation for broadening access to AI research globally by hiring talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.

If Hooker and Adaption Labs are correct about the limitations of scaling, the implications could be huge. Billions of dollars have already been invested in scaling large language models under the assumption that bigger models will lead to general intelligence. However, it is possible that true adaptive learning could prove not only more powerful, but far more efficient.