Reflection AI, a startup founded just last year by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, roughly a fifteen-fold jump from the $545 million valuation it held just seven months ago. The company originally focused on autonomous coding agents but is now positioning itself as an open-source alternative to closed frontier labs such as OpenAI and Anthropic, and as a Western counterpart to Chinese AI firms like DeepSeek.
The startup was launched in March 2024 by Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, a co-creator of the AlphaGo system. Their track record building these advanced systems is central to the pitch: that the right AI talent can build frontier models outside the established tech giants.
Along with its new funding round, Reflection AI announced it has recruited a team of top talent from DeepMind and OpenAI. The company has built an advanced AI training stack that it promises will be open for all. Perhaps most importantly, Reflection AI says it has identified a scalable commercial model that aligns with its open intelligence strategy.
The company’s team currently numbers about sixty people, mostly AI researchers and engineers working across infrastructure, training data, and algorithm development. Reflection AI has secured a compute cluster and plans to release a frontier language model next year, trained on tens of trillions of tokens.
The company stated that it has built something once thought possible only inside the world’s top labs: a large-scale language model and reinforcement learning platform capable of training massive Mixture-of-Experts models at frontier scale. After seeing the effectiveness of this approach in autonomous coding, the company is now bringing these methods to general agentic reasoning.
Mixture-of-Experts (MoE) is an architecture behind many frontier large language models: each input token is routed to a small subset of specialized "expert" subnetworks, so only a fraction of the model's parameters are active at any one time. Previously, only large, closed AI labs were capable of training these systems at scale. The company cited DeepSeek's breakthrough in training such models openly, followed by other Chinese models like Qwen and Kimi, as a wake-up call.
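To make the routing idea concrete, here is a minimal sketch of top-k expert selection in Python. The dimensions, random weights, and function names are illustrative toys, not anything from Reflection AI's actual stack; real MoE layers run inside trained transformers with learned routers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8 experts, hidden size 16, route each token to the top 2 experts.
NUM_EXPERTS, HIDDEN, TOP_K = 8, 16, 2

# Each "expert" here is just a small feed-forward weight matrix.
expert_weights = rng.normal(size=(NUM_EXPERTS, HIDDEN, HIDDEN))
# The router scores every expert for a given token.
router_weights = rng.normal(size=(HIDDEN, NUM_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix their outputs."""
    scores = token @ router_weights                   # one score per expert
    top = np.argsort(scores)[-TOP_K:]                 # indices of the best-scoring experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen experts
    # Only TOP_K of NUM_EXPERTS experts actually run: that sparsity is the point of MoE.
    return sum(g * (token @ expert_weights[i]) for g, i in zip(gates, top))

token = rng.normal(size=HIDDEN)
out = moe_layer(token)
print(out.shape)  # (16,)
```

The practical consequence is that a model can hold far more total parameters than it spends compute on per token, which is why training such systems at frontier scale was long limited to the largest labs.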
The CEO, Misha Laskin, expressed that if Western companies do not act, the global standard of intelligence will be built by someone else, and it will not be built by America. He added that this situation puts the United States and its allies at a competitive disadvantage, as enterprises and sovereign states often cannot use Chinese models due to potential legal repercussions.
American technologists have largely celebrated Reflection AI’s new mission. David Sacks, the White House AI and Crypto Czar, stated it is great to see more American open source AI models, noting that a meaningful segment of the global market will prefer the cost, customizability, and control that open source offers. The co-founder and CEO of Hugging Face also called the funding great news for American open-source AI, while noting the challenge will be to show a high velocity of sharing open models and datasets.
Reflection AI’s definition of being open seems to center on access rather than full development transparency, similar to strategies from Meta with Llama or Mistral. The company plans to release model weights for public use while largely keeping datasets and full training pipelines proprietary. Laskin stated that the model weights are the most impactful component, as anyone can use and tinker with them, whereas the infrastructure stack is only usable by a select handful of companies.
This balance also underpins the company’s business model. Researchers will be able to use the models freely, but revenue will come from large enterprises building products on top of the models and from governments developing sovereign AI systems. Laskin explained that large enterprises want open models they can own, run on their own infrastructure, control costs for, and customize for various workloads.
Reflection AI has not yet released its first model, which will be largely text-based with multimodal capabilities planned for the future. The company will use the funds from this latest round to acquire the compute resources needed to train the new models, with the first one aiming for release early next year.
Investors in this latest round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, and CRV, among others.

