On Wednesday, researchers at Microsoft released a new simulation environment designed to test AI agents, along with new research showing that current agentic models may be vulnerable to manipulation. Conducted in collaboration with Arizona State University, the research raises new questions about how well AI agents will perform when working unsupervised, and about how quickly AI companies can make good on promises of an agentic future.
The simulation environment, which Microsoft calls the Magentic Marketplace, is a synthetic platform for experimenting with AI agent behavior. A typical experiment might involve a customer agent trying to order dinner according to a user's instructions, while agents representing various restaurants compete to win the order.
The team's initial experiments involved one hundred customer-side agents interacting with three hundred business-side agents. Because the marketplace's code is open source, it should be straightforward for other groups to adapt it to run new experiments or reproduce findings.
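For a rough sense of the experimental shape, here is a minimal, hypothetical sketch of a two-sided agent marketplace. The class names and decision rules are illustrative stand-ins, not Microsoft's actual code or API: customer agents issue a request, business agents respond with offers, and each customer picks one.

```python
import random
from dataclasses import dataclass

@dataclass
class Offer:
    business_id: int
    description: str
    price: float

class BusinessAgent:
    """Toy business-side agent; a real environment would use an LLM to craft offers."""
    def __init__(self, business_id: int):
        self.business_id = business_id

    def make_offer(self, request: str) -> Offer:
        return Offer(self.business_id,
                     f"{request} special #{self.business_id}",
                     price=round(random.uniform(8, 25), 2))

class CustomerAgent:
    """Toy customer-side agent with a stand-in decision rule: pick the cheapest offer.
    A model-backed agent would reason over the full offer text instead."""
    def choose(self, offers: list[Offer]) -> Offer:
        return min(offers, key=lambda o: o.price)

def run_round(customers: list[CustomerAgent],
              businesses: list[BusinessAgent],
              request: str) -> list[Offer]:
    results = []
    for customer in customers:
        offers = [b.make_offer(request) for b in businesses]
        results.append(customer.choose(offers))
    return results

if __name__ == "__main__":
    customers = [CustomerAgent() for _ in range(100)]     # customer-side agents
    businesses = [BusinessAgent(i) for i in range(300)]   # business-side agents
    picks = run_round(customers, businesses, "dinner order")
    print(f"{len(picks)} orders placed; sample winner: business {picks[0].business_id}")
```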
Ece Kamar, managing director of Microsoft Research's AI Frontiers Lab, says this kind of research will be critical to understanding the capabilities of AI agents. She said there is a real question about how the world will change as these agents begin to collaborate and negotiate with one another, and that the goal of the work is to understand those dynamics deeply.
The initial research looked at a mix of leading models and found some surprising weaknesses. In particular, the researchers identified several techniques businesses could use to manipulate customer agents into buying their products. They also noticed a marked falloff in efficiency as a customer agent was given more options to choose from, which overwhelmed the model's attention space.
The agents are meant to help users process a large number of options, but the researchers found that current models instead become overwhelmed when presented with too many choices. The agents also ran into trouble when asked to collaborate toward a common goal, apparently unsure of which agent should play which role in the collaboration.
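To illustrate how an option-overload falloff like the one described above could be measured, here is a toy sweep over the number of competing offers shown to an attention-limited stand-in agent. The attention budget and scoring below are assumptions made purely for illustration, not the researchers' methodology or results.

```python
import random

ATTENTION_BUDGET = 20  # assumption: offers beyond this are effectively skimmed

def best_value_rate(num_options: int, trials: int = 2000) -> float:
    """Fraction of trials in which the attention-limited agent still picks the best offer."""
    hits = 0
    for _ in range(trials):
        offers = [random.random() for _ in range(num_options)]  # higher = better value
        considered = offers[:ATTENTION_BUDGET]  # agent only reads the first few carefully
        hits += max(considered) == max(offers)
    return hits / trials

if __name__ == "__main__":
    # Sweep the number of options and watch choice quality degrade past the budget.
    for n in (5, 25, 50, 100, 300):
        print(f"{n:>3} options -> best-value pick rate: {best_value_rate(n):.2f}")
```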
Performance improved when the models were given more explicit instructions on how to collaborate, but the researchers still see the models' inherent capabilities as in need of improvement. The models can be walked through a collaboration step by step, yet the expectation is that they should have these capabilities by default if the aim is to test their inherent abilities.

