By Srinath Sridharan

The global discourse surrounding artificial intelligence (AI) has largely been dominated by an obsession with technological achievement. Conversations brim with excitement about creating smarter systems, optimising computational efficiencies, and redefining innovation’s boundaries. Yet a troubling silence looms large, one that sidesteps the profound ethical and social questions AI raises. Are we, as a global society, so captivated by the “can we?” that we’ve lost sight of the equally important “should we?”.

Science and technology have undeniably propelled humanity forward, yet the rise of AI poses challenges that transcend their domains. At its core, AI forces us to grapple with questions about trust, justice, and fairness. Can an algorithm genuinely understand morality, or does it simply simulate ethical reasoning through statistical models? After all, even after centuries of human society, we have yet to settle morality into anything as neat as a binary. These are not technical conundrums; they are deeply human dilemmas. Yet they are too often sidelined, overshadowed by metrics of policy optimisation and regulatory scale.

The technologists and policymakers shaping AI frequently operate within a framework of accuracy and performance. But this focus raises critical questions: Accuracy for whom? Performance to what end? The assumption that technological progress automatically translates into societal advancement is dangerously naive. When algorithms determine access to healthcare, credit, or even justice, they wield moral authority far beyond their intended design. The societal consequences of these decisions demand scrutiny, of a kind that technologists are neither trained nor equipped to provide.

Globally, attempts to regulate AI reveal a fragmented landscape. The European Union’s AI Act is among the most comprehensive efforts, categorising AI systems by risk and enforcing strict requirements on high-risk applications. The US has taken a more sector-specific approach, while China, with its vast data resources, has embedded AI deeply into surveillance and state control. India, meanwhile, stands at a crossroads, attempting to balance aspirations for technological leadership with the need to address its unique challenges of data sovereignty.

In a country defined by its sheer size and complexity, how do we ensure fairness in AI systems when digital literacy and access are still uneven? How do we protect individual consent in a socio-economic environment where power dynamics are deeply entrenched? These are questions that must inform India’s regulatory frameworks.

Adding to this complexity is the glaring absence of diverse voices in the AI debate. Industry leaders and governments often reduce AI to a geopolitical chess piece, touting it as a tool for economic dominance or national pride.

Consider how harm and benefit are often assessed in AI. Utilitarian principles dominate, measuring outcomes by the greatest good for the greatest number. But is this approach sufficient when fundamental rights are at stake? If an AI system benefits 99% of users but discriminates against 1%, can its existence be justified? 
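To see why the utilitarian tally can mislead, consider a deliberately simple sketch in Python. The scenario (a loan-approval system) and every number in it are invented for illustration: the aggregate “benefit rate” reads as 99%, yet a per-group audit shows one group receiving nothing at all.

# Illustrative sketch only: a hypothetical loan-approval system, with
# fabricated numbers, showing how an aggregate "99% benefit" can hide
# the total exclusion of a small group.

from collections import Counter

# (group, approved) outcomes -- invented data for illustration
decisions = [("majority", True)] * 990 + [("minority", False)] * 10

overall = sum(ok for _, ok in decisions) / len(decisions)
print(f"Overall benefit rate: {overall:.1%}")  # prints 99.0%

approvals = Counter()
sizes = Counter()
for group, ok in decisions:
    sizes[group] += 1
    approvals[group] += ok  # True counts as 1, False as 0

for group in sizes:
    print(f"{group}: {approvals[group] / sizes[group]:.1%} approved")
# majority: 100.0% approved; minority: 0.0% approved

The point is not the numbers, which are fabricated, but that the disparity stays invisible until someone chooses to measure outcomes per group; a utilitarian average never asks that question on its own.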

Moreover, the ethical challenges of AI are evolving at a pace that far exceeds our ability to regulate them. Autonomous weapons, deepfake technologies, and pervasive surveillance are no longer hypotheticals; they are present-day realities.

AI’s implications extend beyond governance to consumption itself: algorithms are already reshaping how we consume, subtly influencing the products we buy, the news we read, and even the values we hold. This transformation is especially significant for India, where more than half of its 1.4 billion people are under 30. As digital natives, this demographic will drive consumption patterns.

AI systems can nudge preferences, shape aspirations, and manipulate choices, often in ways that serve corporate profits over individual well-being. For a young India, this raises troubling questions about agency and freedom.

The impact of AI on labour markets further complicates the picture. Automation will challenge traditional job structures, creating a dual imperative for upskilling and redefining work itself. India’s demographic dividend could quickly become a liability if its workforce is not prepared for an AI-dominated future. This preparation must go beyond technical skills, fostering critical thinking, ethical awareness, and adaptability.

The rise of AI also forces us to confront uncomfortable truths about ourselves. Are we so enamoured with our technological prowess that we overlook its limits? In our quest for efficiency, are we neglecting the larger societal challenges AI creates? It is not enough to regulate AI after its deployment or rely on superficial ethical guidelines. Ethical considerations must be embedded into every stage of AI development.

This shift demands a multidisciplinary approach. Philosophers must grapple with questions of machine morality. Anthropologists must study how AI reshapes cultures and social norms. Artists and writers must imagine alternative futures, envisioning technologies that prioritise humanity over efficiency.

How AI shapes up, and probably even steers humanity, will not be merely a reflection of our technological capabilities; it will be the ultimate litmus test of our moral courage. India faces a delicate balancing act: crafting stabilising AI regulations while avoiding the trap of ceding leadership in emerging technologies to entrenched Western dominance, as happened with Web1 and Web2. If losing technological sovereignty is the price of ethical governance, does India truly have a choice, or is this a Hobson’s choice in disguise?

The writer is a corporate advisor and independent director on corporate boards.

Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com.