At a time when artificial intelligence (AI) is framed either as a technological arms race or a trillion-dollar investment opportunity, Prime Minister Narendra Modi’s articulation of the “MANAV” doctrine at the India AI Impact Summit on Thursday stands out for a different reason: it attempts to anchor AI not in power or profit, but in people. MANAV—meaning “human”—is not merely a clever acronym. It is a philosophical positioning.

Together, the letters are an attempt to define AI not just as code, but as consequence. This matters because the global AI conversation, beyond the usual talkfests, is dangerously tilted. On one side are technology giants racing to build ever larger models. On the other are governments scrambling to regulate systems they barely understand. In between sits the citizen—whose data fuels the models, whose jobs may be displaced, and whose freedoms may be shaped by algorithmic decisions.

Beyond Code

The MANAV doctrine seeks to recentre that citizen. The first pillar—moral and ethical systems—acknowledges that AI is not neutral. Algorithms embed the assumptions, biases, and blind spots of their creators. Without ethical guardrails, AI can amplify discrimination, spread misinformation, or erode privacy at scale. By explicitly foregrounding ethics, India is signalling that technological capability alone cannot define progress.

The second pillar—accountable governance—is equally critical. AI systems increasingly influence credit decisions, insurance premiums, hiring, welfare delivery, and policing. Transparent rules, audit mechanisms, and robust oversight are essential to ensure that algorithmic power does not become unaccountable power. In this sense, the doctrine aligns with the broader need for regulatory clarity, without stifling innovation.

Perhaps the most geopolitically significant element is “N” for national sovereignty. In the age of AI, sovereignty extends beyond borders to data, compute infrastructure, and model ownership. Whose data trains the system? Who controls the servers? Whose laws apply when harm occurs? By emphasising sovereignty, India is asserting that digital dependence is a strategic vulnerability.

Nations that do not build, or at least meaningfully shape, their AI stacks risk ceding influence over both economic value and societal norms. Finally, “V” for values-driven development underscores that AI must serve developmental priorities. For a country of India’s scale and diversity, AI cannot be a luxury tool for the elite. It must power agriculture advisories in vernacular languages, assist small businesses, strengthen public health systems, and support inclusive education. In other words, it must become a digital public good, not merely a commercial product. All this can happen, as Modi said, if humans and intelligent systems co-create, co-work, and co-evolve in the new AI era.

From Doctrine to Design

The doctrine’s importance lies not in rhetoric but in orientation. It offers a framework through which policy, investment, and regulation can be aligned. India’s digital public infrastructure—from Aadhaar to the Unified Payments Interface—has demonstrated that scale and inclusion can coexist. Applying similar thinking to AI could enable population-scale use cases without sacrificing accountability.

Yet doctrine must translate into design. Ethical principles must shape procurement rules. Sovereignty must inform data localisation and compute strategy. Accountability must guide enforcement, not just consultation. Without institutional follow-through, MANAV risks becoming a slogan rather than a standard. Still, in a world where AI is often discussed in terms of dominance, disruption, or decoupling, articulating a human-centred doctrine is itself consequential. It reframes the debate from “Who wins the AI race?” to “Whom does AI serve?” That shift in question may well prove to be the most important intervention of all.