HelpingAI was founded by three Indian technopreneurs in their late teens and early twenties: Nishith Jain (CEO), Varun Gupta (CTO), and Abhay Koul (Chief AI Officer). The team says it bootstrapped early research and pooled technical talent to tackle three specific barriers to local AI adoption: token inefficiency, empathy gaps in conversational models, and latency on low-infrastructure networks. Their flagship model, Dhanishtha, is framed as an engineering response to those constraints and as a culturally aware alternative to Western LLMs for Indian enterprises and service sectors.
Dhanishtha departs from conventional transformer-only workflows by parallelizing reasoning and generation with what the company calls intermediate reasoning, or mid-response processing. The model emits traceable blocks for reasoning interleaved with separate blocks for sentiment-adaptive replies, which HelpingAI says supports a claimed emotional-intelligence score of 98.13 and improves traceability. Architecturally, Dhanishtha ships in scalable sizes from about 2 billion to 15 billion parameters, allowing mobile inference at the small end and enterprise workloads at the large end. HelpingAI lists its training sources as multilingual web data, code repositories, domain-specific corpora, and emotionally tagged conversational data. Optimization techniques highlighted include token compression (up to 5× fewer tokens than some competitors), early stopping, gradient clipping, hyperparameter tuning, in-memory caching, and auto-scaling cluster management. Company benchmarks claim roughly 4× faster inference than DeepSeek.
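The interleaved block pattern described above can be sketched as a small parser that splits a model response into ordered reasoning and reply segments. The `<reason>`/`<reply>` tag names here are illustrative assumptions; the article does not specify Dhanishtha's actual markup.

```python
import re

# Hypothetical tag names for illustration only; Dhanishtha's real
# block format is not published in the article.
BLOCK_RE = re.compile(r"<(reason|reply)>(.*?)</\1>", re.DOTALL)

def split_blocks(response: str) -> list[tuple[str, str]]:
    """Split a response into ordered (kind, text) blocks, where
    reasoning and reply segments may alternate mid-response."""
    return [(kind, text.strip()) for kind, text in BLOCK_RE.findall(response)]

sample = (
    "<reason>User sounds frustrated; acknowledge that first.</reason>"
    "<reply>I understand this has been stressful.</reply>"
    "<reason>Now propose a concrete fix.</reason>"
    "<reply>Let's reset your connection settings.</reply>"
)

blocks = split_blocks(sample)
```

Keeping reasoning in machine-readable blocks rather than free text is what would make each reply traceable back to an explicit intermediate step.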
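Of the training optimizations listed, early stopping is the most self-contained to illustrate. The sketch below is a generic, framework-free version of the technique; it is not HelpingAI's training code, and the parameter names are assumptions.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve by at least
    `min_delta` for `patience` consecutive epochs (a generic sketch)."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")  # best validation loss seen so far
        self.bad_epochs = 0       # consecutive epochs without improvement

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Usage: feed per-epoch validation losses and stop on the first True.
stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.81, 0.82, 0.79]
stopped_at = next(i for i, loss in enumerate(losses) if stopper.step(loss))
```

The same stop signal would typically gate checkpointing, so the saved model is the one with the best validation loss rather than the last epoch's weights.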
The startup positions Dhanishtha for enterprise automation, healthcare and telemedicine triage, adaptive education tools, retail personalization, SMB developer tools, and other verticals where cultural and linguistic nuance matters. HelpingAI identifies three scaling challenges in India (limited high-end compute, scarce specialist talent, and funding gaps beyond MVP) and outlines countermeasures: efficiency-first model design, community engagement for talent pipelines, domain partnerships for monetization, and last-mile integration tools for non-technical adopters. The roadmap includes scaling parameter counts, adding vision and speech modules, ecosystem partnerships, domestic talent development, and embedding visible reasoning steps for enterprise auditability. If the performance and efficiency claims hold at scale, HelpingAI argues, Dhanishtha could become a credible homegrown rival to global LLMs in targeted Indian and emerging markets.