In the Economic Survey 2026, the government has stated that India has adopted a ‘bottom-up approach’ to Artificial Intelligence (AI), prioritising Small Language Models (SLMs) over the pursuit of “frontier model supremacy” through Large Language Models (LLMs) seen in the West. This strategy is a considered response to India’s particular resource constraints in AI and a deliberate move to secure technological autonomy under the ‘Atmanirbhar Bharat’ initiative. 

The report states that by focusing on application-specific, efficient models, India aims to transform its role from a passive consumer of technology into a source of global reliability and value. At present, the country relies on Western AI models to meet its requirements, leaving it overly dependent on foreign technology in the new era of AI. 

Navigating resource constraints of LLMs smartly

The development of advanced Large Language Models (LLMs) is increasingly capital, compute, and energy-intensive, as stated in the document. India faces significant binding constraints that make the Western LLM model difficult to replicate:

Compute shortages: Access to high-end processors (GPUs) is limited by export restrictions and concentrated global supply chains.

Infrastructure stress: Massive AI data centres consume as much energy daily as a rocket launch and require up to 20 lakh litres of water per day, a burden the government has flagged as a concern.

Economic risk: Pursuing scale for its own sake carries high opportunity costs and risks financial contagion from highly leveraged infrastructure bets, the document states.

In contrast, Small Language Models (SLMs) are computationally efficient, easier to fine-tune, and can run on local hardware such as smartphones. Modern smartphones offer ample resources to run advanced SLMs for focused tasks, with reduced reliance on data connectivity. This allows India to scale up its AI capabilities without a proportionate expansion of resource-intensive data centres, which LLMs require.

India’s ‘bottom-up approach’ advantage

India’s strategy for SLMs leverages its deep technical talent pool and the world’s fastest-growing community of open-source developers. Open models are consistently closing the performance gap with proprietary systems, thus offering a way for domestic innovators to build without high entry barriers. 

The report also states that instead of relying on general-purpose models, India focuses on frugal, problem-driven AI that addresses immediate societal needs. Some of the examples include:

Linguistic inclusion: Initiatives like Bhashini and AI4Bharat enable voice-first digital services in native languages.

Sectoral solutions: AI-enabled thermal imaging for healthcare in the South and real-time landslide alerts in the Himalayas demonstrate locally grounded ingenuity utilising SLMs.

India is also protecting the social contract

A central concern for India is the employment paradox of AI. While LLMs in advanced economies risk displacing white-collar tasks – as seen lately in widespread layoffs across industries – India’s bottom-up approach seeks to ‘augment human value,’ i.e., to add to existing work rather than replace human jobs. 

The proposed AI Economic Council also aims to calibrate the pace of adoption, ensuring technology remains subordinate to human welfare and social stability. By fostering sector-specific AI, India can upgrade itself from being an “IT back office” to an “AI front office,” creating more dignified, high-skill employment opportunities for its youth.

Data sovereignty and governance a priority

Data is the “core factor of production” in the AI era, states the survey. India’s proposed data governance framework shifts from rigid localisation toward accountable portability. This ensures that while data can flow across borders, the economic value created from Indian data remains in the country through mandatory mirrored copies and local model fine-tuning.