How a once-tiny research lab helped Nvidia become a $4 trillion company

When Bill Dally joined Nvidia’s research lab in 2009, it employed only about a dozen people and was focused on ray tracing, a rendering technique used in computer graphics. That once-small research lab now employs more than 400 people and has played a key role in transforming Nvidia from a video game GPU startup in the nineties to a $4 trillion company driving the artificial intelligence revolution.

Today, the company’s research lab is shifting its focus toward developing the technology needed to power robotics and AI. Some of that work is already making its way into products. Recently, Nvidia unveiled a new set of world AI models, libraries, and infrastructure designed for robotics developers.

Dally, now Nvidia’s chief scientist, first consulted for the company in 2003 while working at Stanford. When he decided to step down as chair of Stanford’s computer science department a few years later, he planned to take a sabbatical. Nvidia, however, had other ideas.

David Kirk, then leading the research lab, along with CEO Jensen Huang, believed Dally would be a better fit in a permanent role. Dally recalls how the two made a strong case for why he should join Nvidia’s research team—and eventually convinced him.

“It turned out to be a perfect fit for my interests and skills,” Dally said. “Everyone searches for where they can make the biggest impact, and for me, that place is definitely Nvidia.”

When Dally took over the lab in 2009, expansion was the top priority. Researchers quickly branched out from ray tracing into new areas, including circuit design and very large-scale integration (VLSI), a process that integrates millions of transistors onto a single chip. The lab has continued growing ever since.

“We focus on what will make the most positive difference for the company,” Dally explained. “Some areas show promise, but we have to assess whether we can truly excel in them.”

For years, that meant improving GPUs for artificial intelligence. Nvidia recognized the potential of AI early, experimenting with AI-specific GPUs as far back as 2010—long before the current AI boom.

“We saw that AI was going to change the world,” Dally said. “We doubled down on it, specialized our GPUs, and developed software to support it. We engaged with researchers globally long before AI became mainstream.”

Now, with Nvidia leading the AI GPU market, the company is exploring new frontiers beyond data centers. One major focus is physical AI and robotics.

“Robots will eventually play a huge role in the world, and we want to provide the brains for all of them,” Dally said. “To do that, we need to develop the foundational technologies.”

This is where Sanja Fidler, Nvidia’s vice president of AI research, comes in. Fidler joined the company in 2018, bringing expertise in simulation models for robots. Her work at MIT had already caught Huang’s attention at a researchers’ event.

“I couldn’t resist joining,” Fidler said. “It was a perfect topic fit and a great cultural fit. Jensen told me, ‘Come work with me,’ not just for the company.”

She established a research lab in Toronto focused on Omniverse, Nvidia’s platform for building simulations for physical AI. The first challenge was acquiring the necessary 3D data—finding enough images and developing technology to convert them into usable 3D models.

“We invested in differentiable rendering, which makes rendering compatible with AI,” Fidler explained. “Traditional rendering goes from 3D to images, but we needed to reverse the process.”
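The core idea of differentiable rendering is that if the renderer is built from differentiable operations, you can run gradients backward through it to recover the 3D scene parameters that produced an image. The toy sketch below (not Nvidia's code; the 1D "blob renderer" and all names are illustrative) shows the principle: gradient descent through a tiny renderer recovers the position that generated a target image.

```python
import math

# Toy differentiable "renderer": draws a soft 1D blob centered at position p.
def render(p, xs):
    return [math.exp(-0.5 * (x - p) ** 2) for x in xs]

xs = [i * 0.05 for i in range(200)]   # a 1D "image plane" from 0 to ~10
target = render(6.5, xs)              # observed image; true position is 6.5

p = 5.0                               # initial guess at the blob position
lr = 0.01
for _ in range(500):
    img = render(p, xs)
    # Gradient of the squared pixel-error loss w.r.t. p, via the chain rule
    # through the renderer: d/dp exp(-0.5*(x-p)**2) = pixel * (x - p).
    grad = sum(2 * (px - t) * px * (x - p)
               for px, t, x in zip(img, target, xs))
    p -= lr * grad                    # descend toward the position that
                                      # best explains the observed image
```

After the loop, `p` has converged to roughly 6.5: the renderer has been "run in reverse," going from an image back to the scene parameter. Real systems do the same thing at scale, with full 3D geometry, materials, and automatic differentiation in place of this hand-derived gradient.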

Omniverse released its first model, GANverse3D, in 2021, converting images into 3D models. The team then tackled video, using footage from robots and self-driving cars to create simulations through its Neural Reconstruction Engine, introduced in 2022.

These technologies formed the foundation for Nvidia’s Cosmos family of world AI models, announced earlier this year. Now, the lab is working to make these models faster. For robots, reaction time is critical—they need to process information much faster than real time.

“The robot doesn’t need to observe the world at the same speed as humans,” Fidler said. “If we can make these models significantly faster, they’ll be incredibly useful for robotics and physical AI.”

Nvidia recently announced new world AI models designed to generate synthetic data for training robots, along with libraries and infrastructure for developers. Despite the excitement around humanoid robots, the team remains realistic about the timeline.

Both Dally and Fidler believe humanoid robots in homes are still years away, comparing the current hype to the early days of autonomous vehicles.

“We’re making huge progress, and AI is the key enabler,” Dally said. “From visual AI for perception to generative AI for task planning, each breakthrough brings us closer. As we solve these challenges and gather more data, robots will continue to evolve.”