Are AI agents ready for the workplace? A new benchmark raises doubts.

Nearly two years ago, Microsoft CEO Satya Nadella predicted that AI would replace knowledge work, the white-collar jobs held by lawyers, investment bankers, librarians, accountants, IT professionals, and others. Despite the enormous progress made by foundation models, this transformation in knowledge work has been slow to arrive. Models have mastered in-depth research and agentic planning, yet most white-collar work remains relatively unaffected.

This is one of the biggest mysteries in AI, and thanks to new research from the training-data company Mercor, we are finally getting some answers. The research examines how leading AI models perform actual white-collar work tasks drawn from consulting, investment banking, and law. The result is a new benchmark called APEX-Agents, and so far, every AI lab is receiving a failing grade. When faced with queries from real professionals, even the best models struggled to get more than a quarter of the questions right. The vast majority of the time, the models returned a wrong answer or no answer at all.

According to Mercor CEO Brendan Foody, who worked on the paper, the models’ biggest stumbling point was tracking down information across multiple domains, a skill integral to most knowledge work performed by humans. He explained that the researchers built out an entire environment modeled on real professional services, since in real life professionals operate across tools like Slack and Google Drive rather than from a single source of context. For many agentic AI models, that kind of multi-domain reasoning is still hit or miss.

The benchmark scenarios were drawn from actual professionals on Mercor’s expert marketplace, who both designed the queries and set the standard for a successful response. Reviewing the publicly available questions gives a sense of the tasks’ complexity. One example from the law section asks whether a company, under its own policies and the relevant EU privacy laws, can reasonably treat certain data exports as consistent with a specific article. The correct answer is yes, but arriving there requires an in-depth assessment of both corporate policy and legal regulations.

While such a question might stump a well-informed human, the researchers aimed to model the real work done by professionals. If a large language model could reliably answer these questions, it could effectively replace many lawyers working today. Foody called this probably the most important topic in the economy and said the benchmark closely reflects the real work these professionals do.

OpenAI also attempted to measure professional skills with its GDPval benchmark, but the APEX-Agents test differs in important ways. Where GDPval tests general knowledge across a wide range of professions, APEX-Agents measures a system’s ability to perform sustained tasks in a narrow set of high-value professions. The result is a more difficult test for models, but also one more closely tied to the potential for automating these jobs.

While none of the models proved ready to take over as investment bankers, some performed closer to the mark than others. Gemini 3 Flash performed best with 24% one-shot accuracy, followed closely by GPT-5.2 with 23%. Below that, Opus 4.5, Gemini 3 Pro, and GPT-5 all scored roughly 18%.

Although the initial results fall short, the AI field has a history of blowing through challenging benchmarks. Now that the APEX-Agents test is public, it stands as an open challenge for AI labs that believe they can do better, something Foody fully expects to see in the coming months. He noted that the technology is improving rapidly: today’s models are like an intern that gets things right a quarter of the time, whereas last year’s intern got them right only five or ten percent of the time. That kind of year-over-year improvement can have an impact very quickly.