Sarvam-M: Indian Startup Unveils Powerful Homegrown Language Model

Indian firm Sarvam has launched Sarvam-M, a large language model built for Indian languages and education, attracting attention and sparking debate in the Artificial Intelligence space.

Indian tech startup Sarvam has launched Sarvam-M, its flagship large language model, targeting a distinct niche in the global Artificial Intelligence landscape: Indian languages, mathematics, and programming tasks. The model, built on the 24-billion-parameter Mistral Small architecture, is designed to handle complex queries and deliver conversational, multilingual support tailored for Indian users. Sarvam-M supports ten Indian languages, including Hindi, Bengali, and Gujarati, and excels at tasks such as math problem-solving, programming, and machine translation.

The development of Sarvam-M followed a rigorous, multi-stage training process. Initial supervised fine-tuning on high-quality, culturally relevant data grounded the model for everyday conversation and advanced reasoning. This was complemented by reinforcement learning with verifiable rewards, which further strengthened instruction following, logical reasoning, and programming skills. The final phase, inference optimisation, improved the model's throughput and speed through techniques such as FP8 quantisation, albeit with ongoing challenges in handling high-traffic deployments. Sarvam-M is positioned for use in conversational Artificial Intelligence tools, educational platforms, virtual assistants, and machine translation services.
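The FP8 quantisation mentioned above stores each weight as an 8-bit floating-point value, commonly in the E4M3 format (4 exponent bits, 3 mantissa bits, maximum finite value 448). Sarvam has not published its quantisation code, so the following is only an illustrative NumPy sketch of the general idea: scale the weights so the largest magnitude maps to E4M3's maximum, round the mantissa to the reduced precision, then scale back.

```python
import numpy as np

def quantize_fp8_e4m3(x):
    """Emulate FP8 E4M3 rounding of a weight tensor.

    Illustrative sketch only: ignores subnormals and exponent clamping,
    and is not Sarvam's actual inference pipeline.
    """
    # Per-tensor scale so the largest magnitude maps to E4M3's max, 448.
    scale = np.abs(x).max() / 448.0
    y = x / scale
    # frexp yields mantissas in [0.5, 1), so rounding to steps of 1/16
    # keeps exactly 4 significant binary digits (1 implicit + 3 stored bits).
    mant, exp = np.frexp(y)
    mant = np.round(mant * 16) / 16
    # Reconstruct the rounded values and undo the scaling.
    return np.ldexp(mant, exp) * scale

weights = np.array([0.013, -0.44, 0.291, 0.0072, -0.9])
print(quantize_fp8_e4m3(weights))
```

The payoff in a real deployment is that weights occupy half the memory of FP16 and modern accelerators can execute FP8 matrix multiplies natively, at the cost of the small rounding error the sketch makes visible.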

Benchmark tests revealed Sarvam-M's strong performance in Indian languages and reasoning: it outperformed Meta's Llama-4 Scout and matched larger models such as Llama 3.3 70B and Google's Gemma 3 27B. Notably, it demonstrated improvements of over 86 percent in handling hybrid math and romanised Indian language queries. However, its English knowledge scores were slightly lower than some counterparts. Despite these technical achievements, the model met a muted reception upon release, with just 334 downloads on Hugging Face in the first two days, prompting criticism from parts of the developer and investor community. Sarvam AI's founders and supporters defended the model's benchmarks and training methodology, emphasising its significance for India's sovereign Artificial Intelligence ambitions. Industry voices, including Zoho's Sridhar Vembu, highlighted the need for patience and ongoing innovation, framing Sarvam-M as both a milestone and a foundation for further advances in locally relevant Artificial Intelligence technology.

Impact Score: 67

