OpenAI astounded the tech industry for the second time this week by launching its newest flagship model, GPT-5, just days after releasing two new freely available, open-weight models.
OpenAI CEO Sam Altman called GPT-5 “the best model in the world,” though the claim carries a whiff of hyperbole. Reports indicate that GPT-5 only slightly outperforms rival flagship models from Anthropic, Google DeepMind, and xAI on some key benchmarks while lagging slightly on others. Still, it performs well across a variety of use cases, particularly coding. Altman also highlighted its competitive pricing, stating, “Very happy with the pricing we are able to deliver!”
The top-level GPT-5 API costs $1.25 per 1 million input tokens and $10 per 1 million output tokens, with cached input discounted to $0.125 per 1 million tokens. That matches the base-tier pricing of Google’s Gemini 2.5 Pro, another model popular for coding tasks. However, Google charges higher rates once a prompt exceeds 200,000 tokens, so high-volume customers working with long contexts end up paying more.
OpenAI is significantly undercutting Anthropic’s Claude Opus 4.1, which starts at $15 per 1 million input tokens and $75 per 1 million output tokens. Anthropic does offer discounts for prompt caching and batch processing, which can reduce costs for some users.
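The size of that gap is easy to work out from the list prices quoted above. A minimal sketch, using only the per-million-token rates cited in this article (the model labels, the sample workload, and the assumption of no caching or batch discounts are illustrative, not an official rate card):

```python
# List prices cited in the article, in USD per 1 million tokens.
# Gemini 2.5 Pro figures assume the base tier (prompts under 200k tokens).
PRICES = {
    "gpt-5":           {"input": 1.25,  "output": 10.00},
    "gemini-2.5-pro":  {"input": 1.25,  "output": 10.00},
    "claude-opus-4.1": {"input": 15.00, "output": 75.00},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request at list price, ignoring caching and batch discounts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical coding task: a 50k-token prompt with a 5k-token completion.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 50_000, 5_000):.4f}")
```

At these rates, the sample request costs about $0.11 on GPT-5 versus about $1.13 on Claude Opus 4.1, a roughly 12x difference before any discounts, which is the undercutting the article describes.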
Claude Opus has been popular among programmers, both as an option in the coding assistant Cursor and as the engine behind Anthropic’s own Claude Code assistant. Notably, Cursor added GPT-5 as an option minutes after its announcement.
Developers with early access to GPT-5 have praised its pricing. Simon Willison, featured in OpenAI’s launch video, wrote in his review, “The pricing is aggressively competitive with other providers.” Others on social media and forums have called OpenAI’s fees a “pricing killer” and expressed similar approval.
The question now is whether competitors like Anthropic or Google will respond with lower prices. Google has previously undercut OpenAI on pricing, and if others follow suit, we may see the start of a long-awaited large language model price war.
A price war would be welcome, especially for startups relying on AI models. Many coding tool providers struggle with high and unpredictable fees paid to model makers, making their business models unstable. Countless startups building on top of AI models would also benefit from lower costs.
Silicon Valley has long hoped for better price-to-performance from LLMs and cheaper inference. But with the tech industry pouring hundreds of billions into data centers and AI infrastructure, such improvements seemed years away.
OpenAI itself has a $30 billion-per-year contract with Oracle for data center capacity, despite only recently reaching $10 billion in annual recurring revenue. Meta plans to spend up to $72 billion on AI infrastructure in 2025, and Alphabet has allocated $85 billion for capital expenditures, driven by AI demand. With such massive investments, costs typically rise rather than fall.
Given these enormous expenses, startups facing rising API bills may find it too soon to celebrate OpenAI’s pricing move. Yet this week, OpenAI has challenged the market twice, putting pressure on competitors to lower prices. Whether others will follow remains to be seen.