Three years ago, Joe Fioti was working on chip design at Intel when he had a realization: while he was focused on building the best possible chips, the real bottleneck was the software. Even the best hardware is useless if developers find it too difficult to use.
This insight led him to start a new company. On Monday, Luminal announced it has raised $5.3 million in seed funding. The round was led by Felicis Ventures and included angel investments from Paul Graham, Guillermo Rauch, and Ben Porterfield. Fioti’s co-founders are Jake Stevens, formerly of Apple, and Matthew Gunton, formerly of Amazon. The company was also part of Y Combinator’s Summer 2025 batch.
Luminal’s core business is selling compute, much like cloud providers such as Coreweave or Lambda Labs. But where those companies focus on the GPUs themselves, Luminal focuses on optimization techniques that let it extract more compute from its existing infrastructure. A key area of focus is the compiler that acts as the bridge between written code and GPU hardware, the very developer systems that frustrated Fioti in his previous role.
Currently, the industry’s leading compiler is Nvidia’s CUDA system, which is an underrated element in the company’s massive success. Since many parts of CUDA are open-source, Luminal is betting that there is significant value in building out the rest of the software stack, especially while many in the industry are still struggling to acquire enough GPUs.
Luminal is part of a growing group of startups focused on inference optimization. These companies have become more valuable as businesses seek faster and cheaper ways to run their AI models. Other providers like Baseten and Together AI have long specialized in this area, and smaller companies are now appearing to focus on more specific technical improvements.
Luminal and its peers will face strong competition from the optimization teams at major AI labs, which have the advantage of tuning for a single family of models; Luminal, by contrast, has to adapt to whatever model its clients use. Still, Fioti says the market is expanding so quickly that he is not worried about being outmatched by larger companies.
He says it will always be possible to spend six months hand-tuning a model for specific hardware and beat any compiler’s performance. Luminal’s big bet is that for anything short of that extreme effort, a general-purpose solution remains highly economically valuable.