Yotta Data Services is investing over $2 billion to build one of Asia’s largest AI compute hubs in India by August 2026. This facility will be powered by Nvidia’s latest Blackwell Ultra GPUs, with a major AI supercluster deployed at its data centre in Greater Noida and additional capacity in Mumbai. “The project aims to significantly increase India’s AI computing power, targeting both enterprise needs and the National AI Mission,” Sunil Gupta, co-founder, CEO & MD, Yotta Data Services, told Sudhir Chowdhary at the India AI Impact Summit in New Delhi. Excerpts:

India is racing to add GPUs at scale. Where does Yotta sit today in terms of installed and contracted AI compute, and how much of it is actually available to Indian startups and researchers?

India has moved from near-zero GPU availability two years ago to training sovereign AI models today. Yotta has played a central role in that shift and now provides nearly 75% of India’s available GPU compute. Under the IndiaAI Mission, we are empanelled with the government, which procures compute from us on a usage basis and allocates it to startups and institutions such as Sarvam, IIT Bombay and Bhashini. Roughly three-fourths of the GPUs deployed under the Mission come from our infrastructure.

We operate about 10,000 advanced GPUs and are scaling to 40,000 in the coming months, including the latest Blackwell B300. This expansion is part of our $2 billion investment at our Greater Noida hyperscale data centre to build the compute backbone required for AI at population scale.

Hyperscalers dominate cloud AI globally. How does Yotta compete on cost, performance and availability when firms like AWS, Microsoft and Google are deploying tens of thousands of GPUs?

Hyperscalers built global clouds on general-purpose compute, but AI workloads require specialised GPU architecture. A new category called neoclouds has emerged globally, focused on GPU clusters and GPU-as-a-service. Yotta is building that neocloud model for India and the wider APAC region.

We are deploying one of the largest Nvidia GPU clusters in APAC, including B200 and B300 systems with Nvidia’s AI software stack such as NIMs and Nemotron. Rather than competing head-on, hyperscalers partner with us. Microsoft runs Azure AI services on our GPUs, and we work with AWS for sovereign workloads. We provide sovereign, high-performance GPU infrastructure, while hyperscalers bring software and orchestration layers, ensuring cost efficiency, availability and data sovereignty.

Is India at risk of building data centres without workloads, or facing AI demand without enough compute? Where is the real bottleneck today?

India’s real gap was never demand; it was compute. The country has the talent, ranks high on global AI skill indices, and has vast and diverse datasets, multiple languages and one of the world’s largest digital populations. Demand for AI, both consumer and enterprise, is inevitable at population scale, just as we saw with mobile data and UPI. What was missing was large-scale GPU infrastructure. By building supply first, we unlocked latent demand. Today, leading Indian models are trained locally, and as inferencing and enterprise adoption increase, compute availability will remain critical.

Is Yotta building primarily for Indian demand, or are you targeting global AI workloads too?

Both. Our philosophy is: from India, for India, and for the world. AI infrastructure is capital-intensive and complex, so only a few global hubs will emerge. India will be one of them. Even Nvidia sees us as a reference architecture partner for serving the broader APAC and Global South markets. We are enabling Indian startups, hosting global models serving Indian users, and already serving international customers. Our clusters are designed not just for domestic needs but also for global workloads.

With players like Adani announcing large AI data centre investments, do you see a risk of overcapacity?

Not at all. Different players operate at different layers of the stack. Companies like Adani focus on the physical layer: land, buildings and power. Players like Yotta build the upper layers of the stack: GPU compute, cloud and AI services. The ecosystem is complementary.

The existing demand gap is massive. Indians generate and consume about 20% of the world’s data, but only 3% is hosted domestically. That tells you how much capacity still needs to be built. With AI adoption expanding into voice, language, agriculture, healthcare and education, potentially serving 1.4 billion people, any capacity we build will get absorbed. For the next decade, supply will struggle to keep up with demand.

AI data centres are extremely energy-intensive. How is Yotta ensuring reliable, low-cost, green power to stay globally competitive?

There’s a misconception that India lacks power. We are actually a power-surplus country. The challenge is last-mile distribution. We addressed this early by investing in dedicated high-voltage lines, building our own substations and even securing power distribution licences. This allows us to source power directly and reliably.

As a result, 80% of our Navi Mumbai campus runs on green energy, and our Greater Noida campus runs on 100% green power at current load, sourced from solar, wind, hydro and biogas. For us, sustainable, reliable and cost-efficient power isn’t optional; it’s foundational to building globally competitive AI infrastructure.