Technology drawing people closer by transforming how they communicate and entertain themselves is not a new phenomenon. From the days of SMS to team chat platforms, people have built new modes of conversation over the past two decades. But those interactions still involved other people. With the rise of generative artificial intelligence, online gaming and viral challenges, a different form of engagement has entered daily life, and with it, new vulnerabilities.

Take chatbots, for instance. Trained on vast datasets, they have become common tools for assisting with schoolwork, planning travel and even helping one user lose 27 kg in six months. According to a study titled Me, Myself & AI: Understanding and safeguarding children’s use of AI chatbots, almost 64% of children use chatbots for everything from homework help to emotional advice and companionship. And they are increasingly being implicated in mental health crises.

In Belgium, the parents of a teenager who died by suicide alleged that ChatGPT, the AI system developed by OpenAI, reinforced their son’s negative worldview. They claimed the model did not offer appropriate warnings or support during moments of distress.

In the US, 14-year-old Sewell Setzer III died by suicide in February 2024. His mother, Megan Garcia, later found messages suggesting that Character.AI, a start-up offering customised AI companions, had normalised his darkest thoughts. She has since argued that the platform lacked safeguards to protect vulnerable minors.

Both companies maintain that their systems are not substitutes for professional help. OpenAI has said that since early 2023 its models have been trained to avoid providing self-harm instructions and to use supportive, empathetic language. “If someone writes that they want to hurt themselves, ChatGPT is trained not to comply and instead to acknowledge their feelings and steer them toward help,” the company noted in a blog post. It has pledged to expand crisis interventions, improve links to emergency services and strengthen protections for teenagers.

Viral challenges

The risks extend beyond AI. Social platforms and dark web communities have hosted viral challenges with deadly consequences. The Blue Whale Challenge, first reported in Russia in 2016, allegedly required participants to complete 50 escalating tasks, culminating in suicide. Such cases illustrate the hold that closed online communities can exert over impressionable users, encouraging secrecy and resistance to intervention. They also highlight the difficulty regulators face in tracking harmful trends that spread rapidly across encrypted or anonymous platforms.

The global gaming industry, valued at more than $180 billion, is under growing scrutiny for its addictive potential. In India alone, a country with one of the lowest ratios of mental health professionals to patients in the world, the online gaming sector was worth $3.8 billion in FY24, according to gaming and interactive media fund Lumikai, and is projected to reach $9.2 billion by FY29.

Games rely on reward systems, leaderboards and social features designed to keep players engaged. For most, this is harmless entertainment. But for some, the consequences are severe. In 2019, a 17-year-old boy in India took his own life after losing a match in PUBG. His parents had repeatedly warned him about his excessive gaming, but he struggled to stop.

Studies show that adolescents are particularly vulnerable to the highs and lows of competitive play. The dopamine-driven feedback loops embedded in modern games can magnify feelings of success and failure, while excessive screen time risks deepening social isolation.

Even platforms designed to encourage outdoor activity have had unintended effects. Pokémon Go, the augmented reality game launched in 2016, led to a wave of accidents as players roamed city streets in search of virtual creatures. In the US, distracted players were involved in traffic collisions, some fatal. Other incidents involved trespassing and violent confrontations, including a shooting. Developer Niantic later added warnings and speed restrictions.

Question of responsibility

These incidents highlight a recurring question: who bears responsibility when platforms created for entertainment or companionship intersect with human vulnerability.

Some steps are being taken. The EU’s Digital Services Act, which became fully applicable in 2024, requires large platforms to conduct risk assessments on issues such as mental health and to implement stronger moderation. Yet enforcement remains patchy, and companies often adapt faster than regulators.

Tragedies linked to chatbots, viral challenges and gaming remain rare relative to the vast number of users. But they show how quickly new technologies can slip into roles they were not designed to play. What is clear is that the stakes are high. As digital platforms become more immersive and AI more persuasive, the line between tool and companion will blur further. Unless companies embed responsibility into their design choices, and regulators demand accountability, more families may face a painful question: how a product marketed as harmless ended up contributing to a child’s death.