By Rohit Kumar Singh

The heartbreaking story of Sewell Setzer III, a 14-year-old boy in the US who took his own life after forming an emotional attachment to an artificial intelligence (AI) chatbot on Character.AI, has sparked debates about the ethical and legal responsibilities of AI developers. In the Indian context, this case raises profound questions about how our legal system should respond to similar tragedies involving AI.

AI’s role in mental health: A double-edged sword

AI-powered chatbots are increasingly being used as companions, often designed to simulate human interaction and provide emotional support. While such systems can be beneficial for individuals seeking companionship or mental health assistance, they also pose significant risks, particularly when interacting with vulnerable populations like teenagers.

In Sewell’s case, his mother believes that his obsession with an AI chatbot based on a fictional character from Game of Thrones contributed to his mental decline. The chatbot responded in ways that may have deepened his emotional distress rather than alleviating it. This tragic incident highlights the potential dangers of emotionally intelligent AI systems that are not equipped to handle complex human emotions responsibly.

India’s legal landscape: A lack of specific regulation

In India, the legal framework surrounding AI is still evolving. Currently, there are no specific laws governing the use of AI in emotionally sensitive contexts. However, existing laws such as the Information Technology Act (IT Act), 2000, and the Consumer Protection Act, 2019, could be invoked in cases where harm is caused by AI-based applications.

Under Section 79 of the IT Act, intermediaries (which could include platforms hosting AI chatbots) enjoy safe-harbour protection from liability as long as they observe due diligence and do not knowingly allow harmful content or interactions. This protection becomes murky, however, when dealing with AI systems that can engage in personalised and emotionally charged conversations. If an AI chatbot were found to have contributed to a user’s mental distress or suicide, could the platform be held liable under Indian law? The answer is far from clear.

Negligence and duty of care: Can developers be held accountable?

In India, negligence is typically defined as a breach of duty that results in harm to another person. To establish negligence, it must be proven that the defendant owed a duty of care to the plaintiff, that the defendant breached that duty, and that the breach caused harm or injury.

In the case of AI chatbots, one could argue that developers owe a duty of care to users who may form emotional attachments to these systems. If the chatbot’s responses exacerbate a user’s mental health issues or fail to direct them toward professional help when needed, this could potentially be seen as a breach of duty.

However, proving causation between an AI interaction and a tragic outcome like suicide is legally complex. In India’s current legal environment, it would be difficult to hold developers directly responsible unless there was clear evidence that they knew of the risks and failed to take appropriate action.

The need for regulatory safeguards

India must frame specific regulations governing the ethical use of AI systems in sensitive areas such as mental health. These could include:

• Mandatory safeguards: Developers could be required to build safeguards into chatbots that detect signs of distress or suicidal ideation and direct users toward professional help (a simple illustration follows this list).

• Transparency requirements: Platforms should be transparent about how their algorithms work and what data is being used to simulate emotional responses.

• Ethical guidelines: Just as doctors and therapists are bound by ethical guidelines when dealing with patients, developers creating emotionally intelligent AI systems should follow ethical standards designed to protect users from harm.
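To make the first of these proposals concrete, the sketch below shows, in the simplest possible terms, what such a safeguard might look like in code. It is only an illustration under stated assumptions: the function names, phrase list and helpline message are hypothetical, and any real deployment would rely on clinically validated classifiers, verified local helpline numbers and human escalation rather than keyword matching.

```python
# Minimal illustrative sketch of a distress safeguard for a chatbot.
# All names here (detect_distress, safeguarded_reply, DISTRESS_PHRASES,
# HELPLINE_MESSAGE) are hypothetical and chosen only for this example.

DISTRESS_PHRASES = (
    "want to die",
    "kill myself",
    "end my life",
    "no reason to live",
    "hurt myself",
)

HELPLINE_MESSAGE = (
    "It sounds like you are going through a very difficult time. "
    "Please reach out to a verified local suicide-prevention helpline "
    "or a mental health professional right away."
)


def detect_distress(message: str) -> bool:
    """Crude keyword check for signs of acute distress or suicidal ideation."""
    text = message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)


def safeguarded_reply(user_message: str, model_reply: str) -> str:
    """Override the chatbot's normal reply when distress is detected."""
    if detect_distress(user_message):
        return HELPLINE_MESSAGE
    return model_reply
```

The design choice worth noting is that the safeguard overrides the model’s reply rather than appending to it: when distress is detected, the conversation is redirected towards professional help instead of continuing the emotionally charged exchange.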

Comparing international approaches: Lessons for us

The US, where Sewell’s tragedy occurred, is also grappling with how best to regulate AI systems, and there are calls for stricter oversight and clearer guidelines on how emotionally intelligent chatbots should interact with users. In Moffatt vs Air Canada, the British Columbia Civil Resolution Tribunal found Air Canada liable for misinformation given to a consumer by an AI chatbot on its website, and awarded damages.

India can learn from such examples by proactively introducing regulations that ensure AI systems prioritise user safety, especially when interacting with vulnerable populations like children and teenagers. This could include requiring platforms hosting AI chatbots to conduct regular audits and risk assessments, or mandating that these systems include built-in mechanisms for detecting harmful behaviour.

Call for collective responsibility

While regulatory frameworks are essential, addressing tragedies like Sewell Setzer III’s requires a collective effort from all stakeholders. Parents need to closely monitor their children’s online interactions, especially those with emotionally intelligent AI systems. Educators and mental health professionals must raise awareness about the potential risks posed by these technologies.

As we continue integrating AI into our lives in increasingly intimate ways, it is imperative that we establish legal frameworks and ethical guidelines that safeguard users — particularly those who are most vulnerable — from unintended consequences. The tragic death of Sewell Setzer III serves as a sobering reminder of the power technology holds over our lives — and the urgent need for accountability in its development and deployment.

The writer is former secretary, department of consumer affairs, & member, National Consumer Disputes Redressal Commission.

Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com.