I have anxiety issues—if you happen to give this prompt to ChatGPT, expect an empathetic response. “I’m sorry to hear that… Anxiety can be difficult to manage,” it says, while offering “a few general tips to cope”. “Remember, these suggestions may not work for everyone, and it’s essential to find what works best for you,” it adds as a disclaimer at the end, recommending professional help. In fact, Google’s chatbot Bard spits out an almost identical response.
Google has long been a handy medico of sorts for looking up symptoms, self-diagnosing and finding remedies. However, a brief scroll through social media shows that generative AI (artificial intelligence) chatbots are coming after ‘Dr Google’.
“I’ve already had multiple friends tell me that they use ChatGPT as therapy,” a user wrote on Twitter. Also, a recent study in the United States found OpenAI’s chatbot to be as good as, or even better than, doctors at responding to medical queries. Not only were ChatGPT’s responses found to be of better quality than those of the medicos, they were more empathetic too. “For the first time, we compared AI and physicians’ responses to the same patient messages, and AI won in a landslide,” John Ayers, who led the study, said, as per media reports.
While the use of technology for seeking medical advice is nothing new, the trend consolidated during the Covid-19 pandemic, when people became more attuned to technology and virtual consultations became the norm.
It is too soon to identify a distinct trend, the “technology itself being rather new,” according to Joydeep Ghosh, partner and industry leader—life science & health care, Deloitte Touche Tohmatsu India LLP. Even so, “specific generative AI tools are already being introduced for the medical fraternity.” He gives the example of Doximity, a US-based social networking platform for medical professionals, which “has introduced DocsGPT in a beta version to start with. The application use cases, as of now, aim more towards freeing up time and energy of medical practitioners from doing repetitive and template-able administrative tasks like letters, faxes, appointments, certificates, et al, and not core patient interaction, examination or therapy,” the analyst says.
Interestingly, while users fiddle with this brand-new technology to get answers to their medical queries, medical professionals are not far behind, discovering use cases for it in their own work.
“AI platforms can complement existing mental health services by offering additional resources and support to those in need, particularly in scenarios where immediate assistance or widespread access to mental health professionals may be limited,” says Dr Jyoti Kapoor, founder-director and senior psychiatrist, Manasthali, a mental health and wellness platform.
Notably, even before generative AI made a massive splash in the tech and non-tech worlds alike, several companies were already using AI chatbots to offer mental health services, such as Wysa, Woebot and Heyy.
“Unlike ChatGPT and Bard, which use large language models (LLMs) to generate responses to prompts, Wysa does not use generative AI to respond. The responses are written by a team of conversational designers and vetted by clinicians to ensure that it is empathetic, non-judgmental, and clinically safe for the users,” says Megha Gupta, head of AI at Wysa, whose trademark penguin avatar users can chat with.
While the responses of Wysa and similar platforms are vetted by experts, the same is not the case with ChatGPT or Bard.
“While these platforms have the potential to provide quick and accessible information, it is important to recognise their limitations. AI-generated platforms may lack the ability to understand the full context of an individual’s health condition, medical history, and unique circumstances,” says Dr Kapoor. “Furthermore, the accuracy and reliability of the information provided by these platforms can vary significantly, as they heavily rely on the quality and diversity of the data they are trained on,” she adds.
At the same time, “ChatGPT can be useful for patients to get valuable information, aiding their understanding of the problem. For example, if someone says, ‘I have osteoarthritis of the knee,’ they can delve into the details and read up on the subject. It can also assist in understanding the condition. However, when it comes to making an accurate diagnosis, it can become a tedious task. Symptoms alone don’t always lead to a definitive diagnosis. Clinical examination of the patient is crucial in determining the problem,” says Dr Debashish Chanda, lead consultant—orthopaedics, CK Birla Hospital, Gurugram.
While it is widely known that AI chatbots cannot always be considered reliable sources of information, that has not stopped users from logging in to such platforms to find answers to medical queries. What really seems to have caught the eye is the non-judgmental and empathetic manner in which the bots respond.
However, this lack of caution can be dangerous, given that there are currently no guardrails. Earlier this year, a Belgian man reportedly died by suicide after talking about climate change to an AI chatbot called Eliza, created by a US-based startup using GPT-J, an open-source alternative to OpenAI’s GPT-3 language model. “Without these conversations with the chatbot, my husband would still be here. I am convinced of that,” the man’s wife told a media outlet. According to media reports, in the lead-up to his death, he had grown increasingly anxious about climate-related issues and had been talking to the chatbot for about six weeks. In another instance, Microsoft’s AI-powered Bing chatbot urged a man to leave his wife.
“People are using ChatGPT because they are fascinated. It gives them easy answers too. But that has not been validated, and its own creators will tell you that it should certainly not be used for medical purposes,” says Dr Aashish Contractor, director—rehabilitation and sports medicine, Sir H N Reliance Foundation Hospital.
Not just that, data privacy laws are yet to be clearly defined when it comes to this rapidly developing tech. “In India, provisions are being introduced in the Privacy Bill that could address some of the privacy controls required for generative AI,” Ghosh says.
While generative AI might not be the best tool in the hands of users seeking medical advice and therapy, doctors are increasingly looking at ways to incorporate it into their work.
According to a recent report in an international publication, doctors are using ChatGPT to help them communicate more compassionately, such as when breaking bad news, expressing concern about a patient’s health, or explaining medical recommendations.
“100%,” says Dr Contractor when asked if he sees a use for generative AI in his profession. “Some aspects of artificial intelligence are definitely going to be used in the medical profession extensively,” he says. “AI is a huge database, capable of connecting and integrating a lot of information at lightning-fast speed and therefore may help the doctor further open up their capabilities, especially in the light of newer discoveries in medical science,” adds Dr Kapoor.
At the same time, while Dr Chanda sees some “risks”, he believes that the future holds potential advancements in this area. “Moreover, I think it can be beneficial for both patients and doctors to use tools like ChatGPT to gain insights into various medical specialties. For example, an orthopaedic doctor can utilise the tool to inquire about general medicine and enhance their understanding in different areas. It is important to recognise that it is practically impossible for any individual to have expertise in all subjects, and ChatGPT can serve as a valuable resource in bridging those knowledge gaps,” he adds.
Similarly, Ghosh says that the “current usage, as being envisaged now, will help medical practitioners engage more productively in their core therapeutic work, with some of their administrative burdens taken over by the new technology. Future usage, with specific GenAI tools for health care coming up, could offer possibilities of deeper involvement in the diagnostic and therapeutic processes by assisting trained medical practitioners in making quicker and more accurate decisions.”
Here, Dr Kapoor highlights that the use of generative AI platforms in healthcare should be seen as “supportive” and “not as a replacement” for professional medical expertise. “So, AI should serve as a tool to doctors rather than take decisions on their behalf,” she adds.
AI and robotics in healthcare
Keeping well
One of AI’s biggest potential benefits is to help people stay healthy so that they need a doctor less often. The use of AI and the Internet of Medical Things in consumer health applications is already helping people do just that
Early detection
AI is already being used to detect diseases, such as cancer, more accurately and in their early stages. The proliferation of consumer wearables and other medical devices combined with AI is also being applied to oversee early-stage heart disease
Treatment
Beyond scanning health records to help providers identify chronically ill individuals who may be at risk of an adverse episode, AI can help clinicians take a more comprehensive approach to disease management, better coordinate care plans, and help patients better manage and comply with their long-term treatment programmes
Diagnosis
IBM’s Watson for Health is helping healthcare organisations apply cognitive technology to unlock vast amounts of health data and power diagnosis. Google’s DeepMind Health is working in partnership with clinicians, researchers and patients to solve real-world healthcare problems
Decision making
Using pattern recognition to identify patients at risk of developing a condition—or seeing it deteriorate due to lifestyle, environmental, genomic, or other factors—is another area where AI is beginning to take hold in healthcare
Source: PwC