A new AI lab called Flapping Airplanes launched on Wednesday with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal of finding a less data-hungry way to train large models is a particularly interesting one. Based on what I’ve seen so far, I would rate them as Level Two on the trying-to-make-money scale.
But there’s something even more exciting about the Flapping Airplanes project that I hadn’t been able to identify until I read a post from Sequoia partner David Cahn. As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless buildout of data and compute that has defined most of the industry so far.
He outlines two paradigms. The scaling paradigm argues for dedicating a huge amount of society’s resources toward scaling up today’s LLMs in the hopes that this will lead to AGI. The research paradigm argues that we are two to three research breakthroughs away from AGI, and that we should instead dedicate resources to long-running research, especially projects that may take five to ten years to come to fruition.
A compute-first approach would prioritize cluster scale above all else and would heavily favor short-term wins on the order of one to two years over long-term bets. A research-first approach would spread bets temporally and be willing to make many bets that have a low absolute probability of working, but that collectively expand the search space for what is possible.
It might be that the compute folks are right, and it’s pointless to focus on anything other than frenzied server buildouts. But with so many companies already pointed in that direction, it’s nice to see someone headed the other way.

