On Thursday, the Laude Institute announced its first batch of Slingshots grants, aimed at advancing the science and practice of artificial intelligence.
Designed as an accelerator for researchers, the Slingshots program provides resources that are rarely available in academic settings, including funding, compute power, and product and engineering support. In exchange, recipients commit to producing a final work product, which could be a startup, an open-source codebase, or another type of artifact.
The initial cohort consists of fifteen projects, with a particular focus on the difficult problem of AI evaluation. Some of these projects will be familiar to TechCrunch readers, including the command-line coding benchmark Terminal Bench and the latest version of the long-running ARC-AGI project.
Other projects take a fresh approach to long-established evaluation problems. Formula Code, built by researchers at Caltech and UT Austin, aims to evaluate AI agents' ability to optimize existing code. The Columbia-based BizBench proposes a comprehensive benchmark for white-collar AI agents. Other grants explore new structures for reinforcement learning or model compression.
SWE-Bench co-founder John Boda Yang is also part of the cohort, leading the new CodeClash project. Building on the success of SWE-Bench, CodeClash will assess code through a dynamic, competition-based framework.
Yang said he believes that continued evaluation on core third-party benchmarks drives progress, and he voiced concern about a future in which benchmarks become specific to individual companies.