Anthropic has limited the public release of its new AI model, Mythos, citing its strong capability for discovering software security exploits. Instead, the company will share the model only with select large enterprises and critical infrastructure operators. The stated goal is to help these organizations defend against bad actors who might use similar AI for attacks.
However, industry experts suggest the strategy may serve more than security. Some argue it effectively gates the most advanced models behind enterprise contracts, making it harder for competitors to copy them cheaply through distillation techniques, which creates a business advantage for frontier labs like Anthropic. Meanwhile, the AI cybersecurity startup Aisle claims to replicate much of Mythos's capability with smaller models, suggesting no single model is best for all tasks.
Anthropic and other leading labs have recently taken a harder line against model distillation, which threatens their capital-intensive business model. Whether Mythos truly poses a novel cybersecurity threat remains unclear, but its controlled release serves both security and commercial interests.

