Fractal, a Mumbai-based artificial intelligence firm, has released Fathom-R1-14B, an open-source large language model designed for advanced mathematical reasoning. With 14 billion parameters, Fathom-R1-14B outperforms models such as o1-mini and o3-mini and approaches o4-mini levels. The model's release aligns with Fractal's vision to kickstart the development of indigenous reasoning models under the IndiaAI mission, which aims to create scalable artificial intelligence infrastructure for the country.
Accessible on Hugging Face and with its codebase available on GitHub under an MIT license, Fathom-R1-14B posts strong benchmark results. On olympiad-level tests such as AIME-25 and HMMT-25, the model achieves 52.71% and 35.26% Pass@1 accuracy, respectively. With enhanced inference-time computation (cons@64), these scores rise to 76.7% and 56.7%. The model supports a 16K context window, making it practical for complex reasoning tasks. It is built upon DeepSeek-R1-Distill-Qwen-14B and leverages supervised fine-tuning, curriculum learning, and model merging to maximize accuracy and generalization.
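The two metrics above reward different things: Pass@1 scores each sampled answer independently, while cons@k scores the single answer reached by majority vote over k samples, letting extra inference-time compute wash out occasional wrong samples. A minimal sketch of the distinction, assuming cons@k means simple majority voting over final answers (the sample answers and k = 8 below are illustrative, not Fathom-R1-14B outputs):

```python
from collections import Counter

def pass_at_1(samples: list[str], correct: str) -> float:
    """Pass@1: the fraction of individually sampled answers that are correct."""
    return sum(s == correct for s in samples) / len(samples)

def cons_at_k(samples: list[str], correct: str) -> float:
    """cons@k: take the majority answer across k samples, then score it once."""
    majority, _ = Counter(samples).most_common(1)[0]
    return float(majority == correct)

# Eight hypothetical answers sampled for one AIME-style problem
samples = ["42", "42", "17", "42", "23", "42", "42", "17"]
print(pass_at_1(samples, "42"))  # 0.625 — most samples are right, some are not
print(cons_at_k(samples, "42"))  # 1.0   — the majority answer is correct
```

This is why the cons@64 scores (76.7% and 56.7%) sit well above the Pass@1 scores: a model that is right more often than it is wrong on a problem can convert that tendency into a correct consensus answer.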
Fractal has also launched a reinforcement learning variant, Fathom-R1-14B-RS, which attains comparable results using a blend of reinforcement learning and supervised fine-tuning techniques. The release showcases the company's continued commitment to open-source advancements and complements earlier efforts such as Vaidya.ai, a multi-modal artificial intelligence platform for healthcare support. Fractal's initiative parallels other efforts in the Indian artificial intelligence ecosystem, such as Sarvam's unveiling of the Sarvam-M hybrid language model. These milestones underscore India's accelerating role in cutting-edge artificial intelligence research and the drive towards self-reliant model development for national and global use.