Indian AI lab Sarvam unveiled a new generation of large language models on Tuesday. The company is betting that smaller, efficient open-source AI models can take market share from the more expensive systems offered by larger U.S. and Chinese rivals. The launch was announced at the India AI Impact Summit in New Delhi and aligns with the Indian government’s push to reduce reliance on foreign AI platforms and tailor models to local languages and use cases.
Sarvam’s new lineup includes 30-billion and 105-billion parameter models, a text-to-speech model, a speech-to-text model, and a vision model for parsing documents. This marks a sharp upgrade from the company’s 2-billion-parameter Sarvam 1 model released in October 2024.
The 30-billion and 105-billion parameter models use a mixture-of-experts architecture. This design activates only a fraction of their total parameters at a time, which significantly reduces computing costs. The 30B model supports a 32,000-token context window for real-time conversational use, while the larger 105B model offers a 128,000-token window for complex, multi-step reasoning tasks. Sarvam positions its 30B model against competitors like Google’s Gemma 27B and OpenAI’s GPT-OSS-20B, while the 105B model is touted to compete with OpenAI’s GPT-OSS-120B and Alibaba’s Qwen-3-Next-80B.
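The cost savings of a mixture-of-experts design come from sparse routing: a small gating network scores all experts for each token, but only the top few actually run. The sketch below illustrates the idea in miniature; the expert count, top-k value, and dimensions are hypothetical placeholders, not details of Sarvam's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8  # total expert networks (hypothetical, for illustration)
TOP_K = 2        # experts activated per token
DIM = 16         # hidden dimension

# Each "expert" is reduced to a single weight matrix for this sketch.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(DIM, NUM_EXPERTS))  # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # score every expert
    top = np.argsort(logits)[-TOP_K:]      # keep only the k highest-scoring
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Only TOP_K of NUM_EXPERTS experts execute, so per-token compute scales
    # with k rather than with the model's total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=DIM)
out = moe_forward(token)
print(out.shape)
```

In a real model each expert is a full feed-forward block and routing happens per layer, but the principle is the same: total parameters stay large while the active compute per token stays small.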
Sarvam stated the new AI models were trained from scratch rather than fine-tuned on existing open-source systems. The 30B model was pre-trained on about 16 trillion tokens of text, and the 105B model was trained on trillions of tokens spanning multiple Indian languages. The models are designed to support real-time applications like voice-based assistants and chat systems in Indian languages.
The startup said the models were trained using computing resources provided under the government-backed IndiaAI Mission, with infrastructure support from data center operator Yotta and technical support from Nvidia.
Sarvam plans to take a measured approach to scaling its models, focusing on real-world applications rather than raw size. The company intends to open-source the 30B and 105B models, though it did not specify whether the training data or full training code would also be made public.
Sarvam also outlined plans to build specialized AI systems, including coding-focused models and enterprise tools under a product called Sarvam for Work, and a conversational AI agent platform called Samvaad. Founded in 2023, Sarvam has raised more than $50 million in funding from investors including Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners.

