Ola founder Bhavish Aggarwal is investing $230 million into Krutrim, an AI startup he founded, as India pushes to establish itself in a field dominated by U.S. and Chinese firms.
Aggarwal is financing the investment in Krutrim largely through his family office, a source familiar with the matter told TechCrunch. In a post on X on Tuesday, Aggarwal said Krutrim aims to attract a total investment of $1.15 billion by next year; he will seek the remainder of the capital from outside investors, the source said.
The funding announcement coincides with unicorn startup Krutrim making its AI models open source and unveiling plans to build what it claims will be India’s largest supercomputer in partnership with Nvidia.
The lab released Krutrim-2, a 12-billion-parameter language model that has shown strong performance in processing Indian languages. In sentiment analysis tests Krutrim shared Tuesday, the model scored 0.95, compared with 0.70 for competing models, and achieved an 80% success rate in code generation tasks.
The lab has also open sourced several specialized models, including systems for image processing, speech translation, and text search, all optimized for Indian languages.
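For developers, the practical upshot of an open-weights release is that the checkpoints can be pulled into standard open-model tooling. Below is a minimal sketch of loading and querying such a model with Hugging Face transformers; the repository name krutrim-ai-labs/Krutrim-2-instruct is a hypothetical placeholder for illustration, not a confirmed model ID from the release.

```python
# Minimal sketch: querying an open-weights chat model via Hugging Face
# transformers. The model ID below is an assumption, not the official repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "krutrim-ai-labs/Krutrim-2-instruct"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style prompt in Hindi: "What is the capital of India?"
messages = [{"role": "user", "content": "भारत की राजधानी क्या है?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```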
“We’re nowhere close to global benchmarks yet but have made good progress in one year,” Aggarwal, whose other ventures have been backed by SoftBank, wrote on X. “By open sourcing our models, we hope the entire Indian AI community collaborates to create a world-class Indian AI ecosystem.”
The initiative comes as the recent release of DeepSeek’s R1 “reasoning” model, built on a purportedly modest budget, has sent shock waves through the tech industry and added urgency to India’s effort to compete with U.S. and Chinese AI companies.
India last week praised DeepSeek’s progress and said the country will host the Chinese AI lab’s large language models on domestic servers; Krutrim’s cloud arm began offering DeepSeek on Indian servers that same week.
Krutrim has also developed its own evaluation framework, BharatBench, to assess AI models’ proficiency in Indian languages, addressing a gap in existing benchmarks that primarily focus on English and Chinese.
The lab’s technical approach includes a 128,000-token context window, allowing its systems to handle longer texts and more complex conversations. Performance metrics published by the startup showed Krutrim-2 achieving high scores in grammar correction (0.98) and multi-turn conversations (0.91).
The investment follows the January 2024 launch of Krutrim-1, a 7-billion-parameter system that was India’s first large language model. The supercomputer deployment with Nvidia is scheduled to go live in March, with expansion planned throughout the year.