Mumbai: As India debates its place in the global AI race, the most compelling arguments are no longer coming from chip labs or research benchmarks, but from boardrooms and builder communities grappling with execution. That reality was on display at Wharton India Economic Forum’s (WIEF) 30th edition at The St Regis Mumbai on 10 January, where founders and industry leaders argued that India’s AI future will be decided less by frontier models and more by its ability to deploy AI at scale, reliably and profitably, across the world’s enterprises.
The WIEF has, over three decades, emerged as one of the most influential global platforms for examining India’s economic trajectory and its evolving role in the world. Founded in 1996 and organised by students of The Wharton School, part of the University of Pennsylvania in the US, the forum brings together policymakers, business leaders, investors, entrepreneurs and students to engage in rigorous conversations on growth, innovation and capital.
Its 30th edition, held under the theme ‘India’s Vision for Innovation and Growth’, reflected both the scale and the urgency of India’s AI ambitions. WIEF 2026 featured voices such as RPG Group’s Anant Goenka and Invest India’s Nivruti Rai, alongside founders and technologists debating how India can translate ambition into execution in an increasingly uncertain global economy.
A key thread running through the forum was a dedicated panel titled ‘India in the AI Race: Building, Scaling, and Competing Globally’, which brought together Amit Chadha, CEO and Managing Director of L&T Technology Services; Nipun Mehra, Founder and CEO of Neoflo.ai; and Manisha Raisinghani, Founder and CEO of SiftHub, in a conversation moderated by Sumangal Vinjamuri, Associate Vice President at Blume Ventures.
The discussion moved beyond headline-grabbing claims to examine where India realistically stands in the AI stack today, and where it can compete globally, focusing on execution, enterprise readiness and the hard work of scaling AI inside real-world businesses.
The panel was unanimous that while countries pour billions into chips, sovereign models and frontier research, India’s advantage is shaping up very differently: deep application expertise and the ability to operationalise AI inside messy, real-world systems.
Application & talent
For now, India is not competing where the headlines are. It is not leading in semiconductors, Graphics Processing Units (GPUs) or large language models, and the panel made no attempt to pretend otherwise.
“If you look at the entire stack, layer minus one is hardware and chips. We haven’t even started our journey there. Layer zero is LLMs (large language models). Again, we are nowhere close to what Gemini or ChatGPT are doing,” said Vinjamuri. “Application layer is where we are doing great, building in India and selling globally,” she added.
According to Raisinghani, realism matters. Rather than trying to replicate OpenAI or Google, Indian startups are increasingly focusing on building AI applications that solve specific enterprise problems, from finance and supply chains to customer support and engineering workflows, using global models as infrastructure. It is a strategy rooted in pragmatism, speed and market demand.
A second and often underestimated advantage, said the panellists, lies in India’s workforce composition. AI adoption is not just about model training or research labs; it requires integration into business processes, handoffs between humans and machines, and steadfast operational discipline.
The discussion noted that India’s decades-long dominance in IT services and business process outsourcing (BPO), often framed as a low-margin, labour-arbitrage story, may now prove to be an unexpected advantage.
“About 25% of the world’s IT, engineering and BPO services are executed by India,” said Chadha. “That domain knowledge, if you take and leverage for AI, could become the turning point for India.”
That tacit knowledge of how invoices actually get processed, how factories operate and how compliance breaks down in practice, when combined with AI, enables industry-specific solutions that generic models struggle to deliver, particularly in manufacturing, engineering, finance and operations-heavy sectors.
“There is one and only one country in the world that provides high-quality engineering talent as well as high-quality labour talent, the non-engineering variety, and is English-speaking. It’s our game to win,” added Mehra.
This allows Indian companies to move faster from prototypes to production, because building AI systems is as much about redesigning workflows, training people and integrating with legacy systems as it is about models and code.
The combination of engineers, domain specialists, operations staff and English fluency allows Indian companies to deploy AI inside enterprises, not just demo it. As AI shifts from experimentation to execution, that capability becomes a structural edge.
Strategic vulnerability & sovereignty
Optimism about the application layer, however, is tempered by concern about dependency. India’s AI ecosystem today relies heavily on foreign hyperscalers, GPUs and foundational models, a vulnerability that becomes acute in a geopolitically fragmented world.
“At some point there is a question: is there need for a sovereign model?” Mehra asked. “Because you do carry geopolitical risk when everybody’s uploading everything into OpenAI.”
He warned that India’s growing dependence on foreign models and infrastructure could become a serious vulnerability over time. While acknowledging the speed and capability of global foundation models, he raised a fundamental question about resilience and control in an increasingly chaotic geopolitical environment.
He argued that the case for a sovereign AI model is less about matching the performance of ChatGPT or Gemini today and more about safeguarding long-term national and enterprise interests. If sensitive data, critical systems and public-sector workflows are all routed through a handful of overseas platforms, India risks losing autonomy over its digital backbone.
In Mehra’s words, the danger lies in “what happens if someone turns off the tap”, a scenario that could disrupt government functions, regulated industries and core infrastructure. Even if India’s sovereign efforts lag initially, he suggested that building domestic data centres, compute capacity and foundational models with sovereign AI capabilities is an essential insurance against an uncertain future within a turbulent world.
Reliability over raw intelligence
Across the panel, reliability repeatedly surfaced as a decisive factor that will separate successful AI deployments from failed experiments, particularly in enterprise and regulated environments.
Chadha was the most explicit on this point. He emphasised that enterprises care far less about whether a model is marginally smarter, and far more about whether it works consistently, predictably and without costly errors.
He linked reliability directly to business risk. In sectors such as manufacturing, engineering services, financial compliance and critical infrastructure, AI errors can trigger regulatory penalties, safety issues and reputational damage. In this context, a less sophisticated but dependable system is far more valuable than a highly intelligent but unstable one.
Mehra reinforced this view from a startup and application-layer perspective, arguing that enterprise buyers are becoming increasingly sceptical of AI products that work well in demos but fail under real-world conditions.
“There’s a very big difference between somebody who is natively AI and somebody who has just sprinkled LLMs into their enterprise. Demos look great, but when you actually put these systems into production, reliability becomes everything,” Mehra said.
LLMs such as ChatGPT and Google Gemini can write, summarise, analyse and converse, and often serve as the foundation on which AI applications and agents are built. For Mehra, reliability is inseparable from deep integration, process understanding and disciplined engineering.
Meanwhile, Raisinghani connected reliability to adoption. She noted that AI tools only gain trust when they behave predictably inside existing workflows, especially for non-technical users.
“Enterprises already struggle with changing processes. If the AI behaves unpredictably, it reinforces resistance instead of driving adoption. Reliability is what makes AI feel like infrastructure, not an experiment,” she noted.
Agent-based systems, she argued, will succeed only if enterprises can rely on them to execute tasks correctly every time, without constant human supervision or correction. Inconsistent or opaque behaviour, even if technically impressive, becomes a barrier rather than an advantage.
Taken together, the discussion pointed to a reframing of India’s AI ambitions. In an AI economy increasingly defined by deployment rather than discovery, India’s edge lies not at the frontier of invention, but at the frontier of execution.
“2024 was really the year of experimentation. Everyone wanted to try AI, everyone wanted a demo, everyone wanted to see what was possible, but very little of it was production-ready. 2025 was when companies started asking harder questions: how does this fit into our processes, how does it integrate with our systems, and what actually gives us ROI (return on investment)?” Raisinghani said.
If 2024 and 2025 were about copilots and demos, the next phase will be about embedding AI directly into workflows. According to Raisinghani, that shift is imminent. “2026 is going to be the year of agents, productionising and operationalising agents. That’s when AI really starts running workflows, not just assisting humans,” she said.
ThePrint was the digital media partner for the 30th edition of the Wharton India Economic Forum.