In a podcast interview this past week, Sam Altman, CEO of OpenAI, revealed details regarding privacy that may appear quite concerning to the digitally literate. Altman was quoted as saying that he is not only preparing to launch the latest version of GPT, but that its capabilities have left him feeling “scared” and “useless”. Notably, when asked about individuals using ChatGPT as a form of therapy, or their blind dependence on it, Altman said he is worried and not yet aware of what negative impacts the AI engine may have in that department, “but I feel for sure it’s going to have some.”
The comment took the internet by storm, with opinions flooding in from every direction on the implications of users having no legal protection for personal information shared with a chatbot. In the podcast appearance, he explained that while conversations with doctors and lawyers are covered by professional privilege, meaning those details cannot be used against clients in court, no such policy framework yet exists to protect personal information that users share with ChatGPT. This means that anything a user tells ChatGPT could be used against them in court.
Chatbots vs. Counsellors: Understanding the Gap
Add to this the fact that ChatGPT can hardly be trusted to give accurate, personalised and reliable responses to questions or prompts that may require legal protection, and the matter gets decidedly murkier. For adolescents and children in particular, who use ChatGPT to seek answers to delicate questions or to share personal thoughts without understanding the lack of privacy or protection involved, the platform can pose a genuine risk, as Altman himself has admitted.
Dr K Rebecca Maria, counselling psychologist with 1Life Foundation, says a resource like ChatGPT will give as direct and concise a response as possible, but a trained professional will work through the nuance behind a person’s questions. She adds that children dealing with relationship anxiety, exam stress and loneliness are the most common callers at the helpline. “In a month we get around 10 callers with suicidal thoughts or suicidal tendencies. We are bound to keep all information shared with us confidential, but also inform authorities if the person’s life and wellbeing is at risk. That is not a responsibility ChatGPT operates with.”
AI Use and the Broader Mental Health Crisis
Rising mental health diagnoses and suicide rates among young adults, with or without internet access, make this a concerning prospect. The WHO has reported that, globally, the world loses a person to suicide every 40 seconds. As per the last report released by the National Crime Records Bureau, for the year 2022, the suicide rate in India had risen to 12.4 deaths per 1,00,000 individuals. A separate study published in The Lancet, which surveyed around 8,500 young students, found that nearly 50% of them had experienced suicidal thoughts, and some had even attempted suicide in the past.
India currently has a National Suicide Prevention Strategy under the National Health Mission, which focuses on clinical interventions and aims to reduce suicides by about 10% by 2030. The government backs more than ten mental health counselling platforms that can be accessed by those experiencing anxiety, suicidal thoughts or other mental health issues. Mental health counselling helplines, specifically for students, have also been set up across the country by NGOs, state police forces and international agencies. Experts say it is in search of anonymity and an atmosphere free of judgment or repercussions that youngsters turn to platforms like ChatGPT and helplines to divulge their personal thoughts. The difference is that helpline callers have a trained volunteer or a credible counsellor listening to and advising them, whereas ChatGPT can only give automated responses to the user.
A suicide prevention service in Hyderabad made the news earlier this year, ahead of and during the IPL season, reporting a 60% jump in calls over previous years within a span of six months, as its helplines were flooded with calls from betting addicts. In a similarly alarming report, when the Surat police launched their suicide helpline in April this year (after taking cognisance of the 1,866 suicides recorded over three years), the very first day saw 19 calls to the line, an unexpected number according to the officers in charge. Other states and cities, such as Odisha and Bengaluru, have their own suicide prevention helplines.
Altman’s statement comes amidst an ongoing copyright lawsuit brought against OpenAI by The New York Times, which alleges that GPT-generated outputs bear so much similarity to copyrighted NYT articles as to amount to copyright infringement. With the CEO’s honest and public apprehension about the privacy issues the engine is likely to run into in the near future, users must weigh the pros and cons of the AI tool, and reconsider how user-friendly it really is.