Google’s work with deep-learning for diagnostics offers a glimpse into how AI can revolutionise healthcare
Google’s deep-learning research, which uses a large dataset of retinal images to diagnose diabetic retinopathy (DR), a diabetes-linked pathology that causes irreversible blindness, shows the transformative power of artificial intelligence (AI) in healthcare. In 2016, the tech giant announced a deep-learning algorithm, trained on a dataset of 128,000 fundus images (a specific type of imaging of the eye)—each of which had been reviewed by 3-7 expert ophthalmologists from a panel of 54—that accurately interprets underlying signs of the disease (microaneurysms, haemorrhages, hard exudates, etc) and detects referable DR. Given that the pathology affects 18% of the 70 million diabetics in India—and, with 415 million diabetics worldwide, is now the fastest-growing cause of blindness—Google’s algorithm vastly improves the prospects of DR being screened faster, and in greater numbers, than is possible in an unassisted scenario. For countries strained for resources and healthcare infrastructure, this is manna from heaven, since diagnosis in the early stages can prevent or delay the onset of blindness.
The algorithm’s performance was tested on ~12,000 images, with the majority opinion of panels of expert ophthalmologists, drawn from Google’s pool of 54 and selected for their high consistency of accurate diagnosis, set as the reference standard for each image. On diagnosing the disease and grading its severity, the algorithm scored 0.95 on a combined sensitivity-and-specificity metric (the highest possible score being 1), slightly above the median score of 0.91 for the ophthalmologists who were part of the tests. Google has since been working with retinal specialists to build even more robust reference standards, including a focus on 3D retinal images, and is running field trials of AI-assisted diabetic retinopathy screening at Sankara Nethralaya and Aravind Eye Hospital in India.
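To illustrate the kind of metric being cited (this is not Google’s evaluation code), a combined sensitivity-and-specificity score can be computed as the harmonic mean of the two rates; the confusion-matrix counts below are invented for the example:

```python
def sensitivity(tp, fn):
    # true-positive rate: share of diseased eyes the model correctly flags
    return tp / (tp + fn)

def specificity(tn, fp):
    # true-negative rate: share of healthy eyes the model correctly clears
    return tn / (tn + fp)

def combined_score(sens, spec):
    # harmonic mean of sensitivity and specificity; 1.0 is a perfect score
    return 2 * sens * spec / (sens + spec)

# Hypothetical counts for a screening test of 2,000 images
sens = sensitivity(tp=930, fn=70)   # 0.93
spec = specificity(tn=970, fp=30)   # 0.97
print(round(combined_score(sens, spec), 2))  # 0.95
```

A harmonic mean penalises an imbalance between the two rates, so a model cannot reach 0.95 by catching every case while flagging most healthy eyes too.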
Further, research by the company showed that its algorithm is capable of aiding doctors in detecting cases they would otherwise have missed. What makes this machine-human collaboration even more exciting is that the company’s researchers found accuracy was highest when the algorithm complemented the skills of the doctor, rather than in an algorithm-alone or doctor-alone scenario.
Last year, the company showed how the retinal images used in DR diagnosis can also be used to detect the risk of cardiovascular (CV) disease. Using images from nearly 300,000 DR patients, Google trained another deep-learning algorithm to ascertain the risk of an adverse CV event, with high accuracy, on two validation datasets of 12,026 and 999 patients. The algorithm was not only able to distinguish between the retinal images of a smoker and a non-smoker; while doctors can, at best, make out the difference between the retinal images of patients with high blood pressure and those with normal blood pressure, it was able to predict the blood pressure in each case with a very narrow margin of error. The trained model correctly predicted a patient’s risk of cardiovascular disease over the five years following the image being taken 70% of the time, close to the performance of traditional risk-calculation protocols that rely on invasive tests (blood draws, etc). The Google deep-learning system thus not only complements the skills of healthcare professionals, but also pushes the boundaries of what healthcare can achieve in real time. If this is the potential from just one tech company’s research, imagine the gains from the gamut of AI work happening at the moment, and not just in healthcare.
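One way to read a figure like "correct 70% of the time" is as a pairwise ranking accuracy: shown two patients, one of whom went on to have a CV event within five years, how often does the model assign the higher risk score to the right one? A minimal sketch of that evaluation follows; the risk scores are invented for illustration and this is not Google’s methodology:

```python
from itertools import product

def pairwise_accuracy(event_scores, no_event_scores):
    """Fraction of (event, non-event) patient pairs where the model
    gives the higher risk score to the patient who had the event.
    Ties count as half a win."""
    pairs = list(product(event_scores, no_event_scores))
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return wins / len(pairs)

# Hypothetical model risk scores for five patients
had_event = [0.9, 0.7, 0.6]   # patients with a CV event within 5 years
no_event = [0.5, 0.8]         # patients with no event
print(round(pairwise_accuracy(had_event, no_event), 2))  # 0.67
```

This pairwise measure is useful because it does not depend on choosing a risk threshold; it scores only how well the model orders patients by risk.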