Irregular raises $80 million to secure frontier AI models

On Wednesday, AI security firm Irregular announced it has raised $80 million in new funding. The investment round was led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round values Irregular at $450 million.

Co-founder Dan Lahav told TechCrunch that the company expects a significant amount of future economic activity to stem from human-on-AI and AI-on-AI interaction, a shift he believes will break the existing security stack at multiple points.

Formerly known as Pattern Labs, Irregular is already a major player in AI evaluations. Its work is cited in security evaluations for Anthropic’s Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models, and its SOLVE framework for scoring a model’s vulnerability-detection ability is widely used across the industry.

While Irregular has done significant work assessing models’ existing risks, the company raised the new funding with an even more ambitious goal in mind: spotting emergent risks and behaviors before they surface in the wild. To that end, it has built an elaborate system of simulated environments that enables intensive testing of a model before its release.

Co-founder Omer Nevo described the company’s process, which includes complex network simulations in which AI takes on the roles of both attacker and defender. Running new models through these simulations lets the team pinpoint where defenses hold up and where they fail.

Security has become a critical point of focus for the entire AI industry as the potential risks posed by frontier models continue to grow. OpenAI recently overhauled its internal security measures with an eye toward preventing corporate espionage. At the same time, AI models are becoming increasingly adept at finding software vulnerabilities, a capability that has serious implications for both attackers and defenders.

For the founders of Irregular, this is just the first of many security challenges posed by the rapidly growing capabilities of large language models. Lahav says that while the goal of frontier AI labs is to create increasingly sophisticated models, his company’s goal is to secure them. He acknowledges that it is a moving target, with much more work ahead.