As AI coding tools generate billions of lines of code each month, a new bottleneck is emerging: ensuring that software works as intended. Qodo, a startup building AI agents for code review, testing, and governance, is betting that verification will define the next phase of software development.
The New York-headquartered startup has raised a $70 million Series B round led by Qumra Capital, bringing its total funding to $120 million. Other investors in the round include Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, Peter Welinder of OpenAI, and Clara Shih of Meta.
Qodo positions itself as a trust layer for AI-generated code as enterprises accelerate their adoption of tools like GitHub Copilot and Claude Code. Many are discovering that faster code output does not necessarily translate into reliable or secure software.
While most AI review tools focus on what changed, Qodo focuses on how code changes affect entire systems. It factors in organizational standards, historical context, and risk tolerance to help companies manage AI-generated code more confidently.
Itamar Friedman founded Qodo in 2022, just months before the launch of ChatGPT. He previously co-founded Visualead, which Alibaba acquired, and went on to lead Alibaba's machine vision business. He told TechCrunch that two experiences inspired Qodo: his time at Mellanox, which was later acquired by Nvidia, and building Visualead.
At Mellanox, where he worked on automating hardware verification using machine learning, he realized that generating systems and verifying systems require very different approaches and tools. Later, at Alibaba’s Damo Academy, he saw AI evolve toward systems capable of reasoning over human language. By 2021–2022, it became clear to him that AI would generate a large share of the world’s content, especially code, reinforcing his view that code generation and verification would require fundamentally different systems.
A recent survey underscores the gap between awareness and practice: 95 percent of developers say they do not fully trust AI-generated code, yet only 48 percent consistently review it before committing.
“Code generation companies are largely built around LLMs. But for code quality and governance, LLMs alone are not enough,” Friedman said. “Quality is subjective. It depends on organizational standards, past decisions, and tribal knowledge. An LLM cannot fully understand that context. It is like taking a great engineer from one company and asking them to review code at another — they lack the internal context.”
He explained that while companies such as OpenAI and Anthropic help shape the broader AI narrative, including in adjacent areas like code review, they are largely focused on building features rather than end-to-end solutions. Although other startups exist in this space, many remain early stage and have yet to see widespread enterprise adoption.
Qodo is leaning into performance to stand out. The startup recently ranked first on a prominent code review benchmark with a score of 64.3 percent, more than 10 points ahead of the next competitor and 25 points ahead of Claude Code Review. The benchmark highlights its ability to catch tricky logic bugs and cross-file issues without overwhelming developers.
In the past month, it has launched Qodo 2.0, a multi-agent code review system now leading current benchmarks, and introduced tools that learn each organization’s definition of code quality.
The company is already working with major enterprises such as Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments, as well as high-growth firms like Monday.com and JFrog.
“Every year has had a defining moment — from Copilot to ChatGPT to full task automation,” Friedman said. “Now we are entering a new phase: moving from stateless AI to stateful systems — from intelligence to ‘artificial wisdom.’ That is what Qodo is built for.”