By S Ramadorai & Arvind Singh

In our previous article, we explored how artificial intelligence (AI) can transform the delivery of healthcare (bit.ly/3NAVXtf). Since then, much has happened in the world of AI. Some predictions suggest that AI will become so capable that it can replace the doctor in delivering medical treatment. Most such predictions come from people who are not qualified in medicine. Those not professionally involved in medicine tend to view healthcare as simply making a diagnosis based on the patient’s complaints and prescribing the appropriate medication or procedure. In many countries, healthcare has become a huge profit-driven industry, making the whole experience very transactional. However, the delivery of healthcare services is much more than prescribing a remedial measure.

The effectiveness of healthcare is rooted in human connection, and it must be seen as a process of “healing”. We must take into account the different dimensions of the patient-provider relationship, which are integral to a patient’s well-being and to improved health outcomes. A doctor can be seen as a healer who first makes a diagnosis based on the patient’s history, signs and symptoms. All medical students are taught that symptoms are what the patient describes as troublesome, while signs are what the doctor elicits through a thorough examination. Headache is a symptom; raised intracranial pressure is a sign. Combined with further diagnostic investigations such as a blood test or a CT scan, this completes the diagnostic process. Once the diagnosis has been confirmed by investigations, appropriate treatment options are prescribed. The clinician then explains the diagnosis and the treatment options in a way the patient can understand and trust, so that the patient can actively participate in the care process. It is therefore imperative for a good clinician to have excellent diagnostic skills, knowledge, experience, and compassion.

The use of AI in healthcare has the capability to streamline processes, automate tasks, and provide quick access to information, minimising the scope for human intervention. This can improve efficiency, save time, and enable data-driven decision making. However, the reduced amount of direct human interaction may lead to patients feeling depersonalised, isolated, or as mere data points in the healthcare system.

The lack of emotional support, intuition, and human touch that are integral to a patient’s betterment can have a myriad of negative implications. When patients seek medical treatment, they often experience fear, vulnerability, and uncertainty. In such situations, the presence of a compassionate healthcare professional who can provide comfort, reassurance and personalised care is crucial to the overall healing process.

Moreover, the human connection goes beyond the emotional aspect. Healthcare providers possess tacit knowledge gained through years of experience, clinical judgment, and intuition. They can perceive subtle cues, nuances, and non-verbal communication that may not be fully captured or understood by AI algorithms.

The use of automation in aircraft control offers an instructive comparison. Consider the two crashes of the Boeing 737 MAX. Soon after take-off, the automated MCAS (Maneuvering Characteristics Augmentation System) acted on an erroneous reading from a faulty angle of attack sensor and concluded that the aircraft was about to stall. The standard manoeuvre to deal with a stall is to lower the nose, and this is what the system did, repeatedly commanding nose-down trim. The pilots, on seeing the nose dropping, tried to pull it back up. This worked for a short time, before MCAS intervened again. Until the aircraft crashed, the system continued to override the pilots’ inputs without telling them clearly why it was acting. This opacity is an aspect of automated and AI-based systems that has become more obvious in recent years, to the concern of AI specialists, and is often referred to as the “black box” problem: the non-transparent way in which an algorithm functions when it overrides the instructions of the human decision-maker. This level of automation could wreak havoc on the healthcare system.

In medicine, we use a lot of technology, but the ultimate control rests with a human caregiver. Consider the blood pressure monitoring systems routinely used by anaesthetists during surgery. When the blood pressure rises above or falls below predefined limits, the device sets off an alarm, alerting and prompting the anaesthetist to take action to bring the blood pressure within acceptable limits. Unlike MCAS in the 737 MAX, the monitoring system in this context does not attempt to correct the situation or override the anaesthetist, because there are situations where the change in blood pressure may have been induced deliberately by the surgeon. Therefore, all AI systems in healthcare must, in our view, be advisory rather than executive, so that the healthcare professional is made aware of the situation and is left to take the corrective action.
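The distinction between an advisory and an executive system can be sketched in a few lines of code. The Python fragment below is purely illustrative: the thresholds, function names and “actions” are our own inventions for the sake of the argument and do not describe any real monitoring device.

# Illustrative sketch only: an advisory monitor raises an alarm and leaves the
# decision with the clinician; an executive controller would act on its own.
SYSTOLIC_LOW, SYSTOLIC_HIGH = 90, 160  # hypothetical limits set by the anaesthetist

def advisory_monitor(systolic_bp: int) -> str:
    # Advisory pattern: report the situation, take no action.
    if systolic_bp < SYSTOLIC_LOW:
        return f"ALARM: systolic {systolic_bp} mmHg is below {SYSTOLIC_LOW}; clinician to review"
    if systolic_bp > SYSTOLIC_HIGH:
        return f"ALARM: systolic {systolic_bp} mmHg is above {SYSTOLIC_HIGH}; clinician to review"
    return "Blood pressure within set limits"

def executive_controller(systolic_bp: int) -> str:
    # Executive pattern (the one we caution against): the system itself decides
    # and acts, as MCAS did, without asking the human in charge.
    if systolic_bp < SYSTOLIC_LOW:
        return "auto-administering corrective drug"
    if systolic_bp > SYSTOLIC_HIGH:
        return "auto-adjusting anaesthetic depth"
    return "no action"

for reading in (120, 85, 175):
    print(reading, "->", advisory_monitor(reading))

The advisory function only raises a message; whether and how to intervene remains a human decision, which is exactly the division of responsibility we are arguing for in healthcare.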

To mitigate the potential disruption of the human connection, healthcare systems should prioritise the integration of AI technologies in a way that enhances rather than replaces the patient-provider relationship. This can be achieved by incorporating AI as an advisory tool that can assist caregivers in their decision-making process, enabling them to focus more on patient care, communication, and relationship building.

The potential transformative power of AI is immense. AI technologies can analyse vast amounts of medical data quickly and accurately, enabling healthcare professionals to make better-informed decisions. They can assist in the early detection of diseases such as cancer and cardiovascular conditions, leading to timely interventions and improved prognoses. AI-driven systems can also significantly reduce human error. To realise these benefits, healthcare organisations must invest in training healthcare professionals to use AI technologies effectively.

AI can bring about significant advances in healthcare, but it is critical to recognise and address the concerns regarding the disruption of the human connection. By prioritising the preservation of human interaction, empathy, trust, and personalised care, healthcare systems can leverage the benefits of AI while ensuring that patients continue to receive the holistic, compassionate, and human-centred care they deserve.

The authors are, respectively, former vice-chairman, TCS, & eye surgeon and consultant, devices and processes