The Mahabharata is a timeless epic whose wisdom spans centuries, bearing on governance, diplomacy, and warfare. Among its many tales, one episode from the battle of Kurukshetra echoes eerily in today’s digital age of war and misinformation.
During the height of the Kurukshetra war, the Pandava brothers were left floundering before their teacher, Dronacharya, whose unrelenting assault threatened to turn the tide in favour of the Kauravas. Even Arjun, the fiercest of warriors, could not withstand his mentor’s prowess. Krishna, ever the strategist, knew that defeating Dronacharya by might alone was futile. Instead, he advised psychological warfare, proposing a ploy to break Dronacharya’s spirit: they would spread the false news that his beloved son Ashwatthama had been killed.
To maintain a semblance of truth, they exploited a technicality. Bhima killed an elephant named Ashwatthama, and Yudhishthira, known for never lying, announced that “Ashwatthama is dead”, only to murmur afterward that it was the elephant. Trusting the messenger’s reputation for truth, Dronacharya laid down his arms and was slain. The victory, though strategic, was marred by deceit. The episode reminds us that misinformation, whatever its morality, has been a tactical tool of warfare for millennia.
Fast forward to the present: the India-Pakistan conflict, in which a tenuous ceasefire was declared over the weekend, is simultaneously being fought 24/7 in the digital arena. Just as Krishna used narrative manipulation to gain advantage, modern actors, both state and non-state, are using social media to shape public perception, disorient the enemy, and steer national and international responses.
The government has had to confront a surge of false narratives across platforms such as X, Telegram, and WhatsApp. The Press Information Bureau’s (PIB) fact-check unit has been busy debunking misinformation; it has issued more than 21 clarifications related to the ongoing military operations, including falsehoods surrounding the Pahalgam attack.
Beyond clarifications, advisories have been issued urging citizens to refrain from sharing unverified content. Social media companies like Facebook, X, and YouTube have been instructed to remove content flagged as false by government fact-checkers. Failure to comply within 36 hours could strip these platforms of their safe harbour protections.
In cases deemed more severe, the government has blocked over 8,000 social media accounts that were disseminating false or inflammatory content. The ministry of information and broadcasting has also advised over-the-top platforms to remove content of Pakistani origin, citing national security. Around 16 Pakistani YouTube channels, with a combined subscriber base exceeding 63 million, have been banned for spreading provocative and misleading narratives. While the legal framework is active, the real-time nature of digital misinformation demands quicker, smarter responses.
This is where the scale of the problem becomes historically unprecedented. Scholars like Eric Hobsbawm have highlighted propaganda as an essential instrument of modern warfare. But today’s misinformation landscape is driven not by centralised states alone but by a vast array of decentralised, algorithm-powered networks. As Niall Ferguson outlined in his book The Square and the Tower, power has shifted from hierarchies to networks, enabling rapid, global dissemination of narratives, truthful or otherwise.
Yuval Noah Harari, in his books Nexus and 21 Lessons for the 21st Century, has framed today’s conflicts as battles not just for territory, but for informational supremacy. Digital warfare, he argues, is inherently destabilising because of the speed and scale at which it can sway opinions. Sensational content thrives in algorithmic environments, often going viral before fact-checkers can catch up. In this setting, misinformation is not merely propaganda; it becomes a weapon that can alter the course of real-world events.
While India has framed its precision strikes in response to the Pahalgam killings as non-escalatory and targeted at terror infrastructure, Pakistan has countered with its own narrative. Claims and counterclaims have flooded social media. Artificial intelligence (AI)-generated videos, fake WhatsApp messages, and deepfake content have blurred the line between fact and fiction. In this haze, even verified information struggles to retain its authority.
The cacophony of competing narratives, manipulated visuals, and synthetic texts has transformed war into a psychological battlefield where perception often trumps reality. The urgency to address this threat cannot be overstated. Governments must now operate at a pace matching, or exceeding, that of digital misinformation.
The challenge is tougher than at any time in history. While misinformation has always been a part of warfare, what we face today is an entirely new beast. Social media and the Internet have shattered geographical boundaries. Misinformation can now influence public sentiment, destabilise governments, and escalate conflicts, all within minutes.
The government’s efforts, from the PIB’s fact-checking and legal amendments to platform advisories, are commendable, but speed is key. Misinformation spreads virally, often before official agencies can verify, respond, or clarify. This calls not just for stronger regulation, but also for greater investment in real-time monitoring tools, AI-based detection systems, and international coordination with allies and platforms.
In the age of digital warfare, Krishna’s ancient tactic is being played out again, but this time on screens, in networks, and at speeds unimaginable even a decade ago. The only difference is that the consequences of believing a false tale today could trigger something far more catastrophic than the fall of a single warrior. The battle for truth, it turns out, is the battle of our time.