Harmonic, an AI startup co-founded by Robinhood CEO Vlad Tenev, announced the beta launch of its iOS and Android chatbot app on Monday. The app allows users to interact with Harmonic’s AI model, Aristotle. The company aims to expand access to Aristotle, which it claims provides “hallucination-free” answers for questions involving mathematical reasoning—a bold claim given the reliability challenges faced by current AI models.
Harmonic is focused on developing “mathematical superintelligence” (MSI), with the long-term goal of assisting users in fields that rely on math, such as physics, statistics, and computer science. Tudor Achim, Harmonic’s CEO and co-founder, stated in an interview with TechCrunch that Aristotle is the first publicly available product capable of formal reasoning and of producing verified outputs. He emphasized that, within quantitative reasoning domains, the model guarantees no hallucinations.
The company also plans to release an API for enterprise access to Aristotle, along with a web app for consumers. Harmonic reported that Aristotle achieved gold medal performance at the 2025 International Math Olympiad (IMO) in a formal evaluation, in which the problems were translated into a machine-readable format. While Google and OpenAI also developed AI models that reached gold medal performance at this year’s IMO, their evaluations were conducted informally, using natural language.
Harmonic has not released additional benchmarks for Aristotle at this time. The beta launch follows the company’s recent $100 million Series B funding round, led by Kleiner Perkins, valuing the startup at $875 million. Achim stated that Harmonic is progressing rapidly toward achieving MSI and that investors considered the valuation fair given the company’s ambitious goals.
Several major tech companies are training AI models to solve math problems, as mathematical proficiency is seen as a key indicator of reasoning ability, and systems that excel at math may prove useful in other domains. Achim explained that Harmonic ensures accuracy by having Aristotle generate its answers in Lean, an open-source programming language and proof assistant. Before an answer is delivered, the solution is checked algorithmically rather than by another AI system, an approach to verification also used in high-stakes fields like medical devices and aviation.
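For readers unfamiliar with Lean, the sketch below shows what a machine-checkable statement looks like. It is an illustrative example written for this article, not output from Aristotle, and the theorem names are invented; the point is that Lean’s trusted kernel accepts a file only if every proof in it is formally valid.

```lean
-- Illustrative only: simple Lean 4 theorems, not output from Aristotle.
-- Lean's kernel re-checks every proof; a false statement such as
-- `2 + 2 = 5` would cause the file to fail to compile.

-- A concrete arithmetic fact, proved by direct computation.
theorem two_add_two_eq_four : 2 + 2 = 4 := rfl

-- A general fact about natural numbers, using a lemma from Lean's core library.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The same principle scales up: a solution to an IMO-style problem stated in Lean is either accepted or rejected by the proof checker, leaving no room for a confidently worded but incorrect answer within that formal domain.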
Achieving hallucination-free performance in AI, even within a narrow domain, remains a significant challenge. Studies indicate that even top AI models frequently hallucinate, and the issue persists in newer models. OpenAI’s latest reasoning models, for example, hallucinate more than their predecessors.