By Ravi Singh
At a recent Google event, CEO Sundar Pichai introduced a groundbreaking algorithm that can detect cardiovascular conditions from a single retinal scan. Similar path-breaking AI algorithms are finding applications in areas as diverse as autonomous vehicles, healthcare, agriculture and the criminal justice system. These algorithms operate on the bedrock of Big Data, the vast amounts of information used to train them.
Proponents of AI argue that these algorithms also help mitigate the cognitive biases that humans bring to a task. For example, if doctors use a predictive AI tool to scan X-ray reports or CT scans, the results can be highly objective, free from the biases a doctor may hold. Similarly, imagine a scenario where an AI tool assists the judicial system in rendering judgments for offences like jumping red lights or speeding. Drawing on massive data sets, such judgments could be objective, based on hard evidence, and leave minimal room for subjectivity.
This could expedite the disposal of cases while reducing personal biases. Furthermore, AI-powered autonomous vehicles could ensure zero accidents, leading to enhanced safety on the roads.
The current generation of AI algorithms is based on the idea of deep reinforcement learning. This means that, unlike previous generations of algorithms, we don’t need to teach the AI specific strategies to accomplish a task. Instead, the AI system is fed only the basic rules and a historical data set, and it learns on its own the optimal strategy for achieving the desired outcome. For example, when Google DeepMind’s AlphaGo beat the world’s number one player of Go (a complex strategy board game) in 2017, it had not been trained on particular strategies fed by human experts, unlike its IBM predecessor Deep Blue, which beat Garry Kasparov in chess in 1997.
AlphaGo studied older matches as its training data and played thousands of games against itself to learn the best strategies. Because DeepMind’s technique was completely general, the same approach enabled its AI to learn to play 49 other games.
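To make the self-play idea concrete, here is a minimal sketch in Python. It is an illustrative assumption throughout, not DeepMind’s actual method: it uses the toy game of Nim instead of Go, a simple lookup table instead of deep neural networks, and a Monte Carlo-style value update. The point it demonstrates is the same, though: the program is given only the rules and improves purely by playing against itself.

```python
import random
from collections import defaultdict

# Self-play learning on Nim: players alternately take 1-3 sticks from a
# pile of 10; whoever takes the last stick wins. The agent is given only
# the rules and discovers a strategy by playing against itself.

ACTIONS = (1, 2, 3)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = defaultdict(float)  # Q[(sticks_remaining, action)] -> estimated value

def choose(sticks):
    """Epsilon-greedy: mostly exploit the best known move, sometimes explore."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

for episode in range(20000):
    sticks = 10
    history = []  # (state, action) per move, alternating players
    while sticks > 0:
        a = choose(sticks)
        history.append((sticks, a))
        sticks -= a
    # The player who made the last move wins (+1); working backwards,
    # the sign flips each ply because the players alternate.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -GAMMA * reward

# Inspect the learned policy for each pile size.
for s in range(1, 11):
    best = max([a for a in ACTIONS if a <= s], key=lambda a: Q[(s, a)])
    print(f"{s} sticks left -> take {best}")
```

Run long enough, the learned policy should rediscover Nim’s well-known winning strategy of leaving the opponent a multiple of four sticks, a rule no human ever supplied.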
While these advancements are impressive, they bring a significant concern to the fore. The current generation of AI algorithms, including the likes of ChatGPT, works like a black box. They may produce the desired outcomes, but they cannot explain how they arrived at them. Even their creators are unable to discern the exact decision-making process hidden within the complex operations of neural networks. For instance, facial recognition software may recognise your face correctly, but it may be impossible to know how exactly it did so. This opacity has practical implications.
An AI system may conclude that a person is prone to a stroke based on their scans, without disclosing the reasoning behind its conclusion. Similarly, an AI-based autonomous vehicle may choose to collide with a person to avoid an accident with a truck, leaving the logic behind such decisions obscure. Nor are AI systems free of bias. Facial recognition systems have been found to have the poorest accuracy for subjects who are female, Black and 18-30 years old. In another instance, Amazon discontinued a hiring algorithm when it was discovered to favour applicants who used words like “executed” or “captured”, which were more prevalent in men’s resumes. Bias creeps into algorithms because AI systems learn to make decisions from training data, which can encode biased human decisions or reflect historical and social inequities.
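The mechanism is easy to demonstrate. The following minimal sketch uses entirely made-up data and a crude word-count score, both illustrative assumptions rather than anything resembling Amazon’s actual system, to show how a model trained on historically skewed decisions learns to reward vocabulary that merely correlates with the favoured group.

```python
from collections import Counter

# Hypothetical historical decisions: resumes (as word lists) with outcomes
# that reflect past human bias, not candidate quality.
history = [
    (["executed", "projects"], "hired"),
    (["captured", "markets"],  "hired"),
    (["executed", "strategy"], "hired"),
    (["led", "team"],          "rejected"),
    (["organised", "events"],  "rejected"),
    (["led", "projects"],      "rejected"),
]

hired, rejected = Counter(), Counter()
for words, outcome in history:
    (hired if outcome == "hired" else rejected).update(words)

def score(resume_words):
    """Crude score: words seen more often in past hires push the score
    up, regardless of whether they signal merit."""
    return sum(hired[w] - rejected[w] for w in resume_words)

# Two equally qualified candidates, different vocabulary:
print(score(["executed", "team", "projects"]))   # scores higher
print(score(["organised", "team", "projects"]))  # scores lower
```

Neither word says anything about merit; the model simply inherits the pattern baked into its training data.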
Ironically, the perceived objectivity of data-driven decisions may lead individuals to rely on AI algorithms unquestioningly. This may reduce humans to a set of data points and undermine core human values. Yuval Noah Harari discusses Dataism in his book Homo Deus. Proponents of Dataism view the universe as a flow of data and organisms as biochemical algorithms. Dataism can dehumanise people, treating them and their activities as mere statistics.
In ancient times, our ancestors performed rituals to appease the nature gods, hoping for timely rains, abundant crops and prosperity. They held an unwavering belief that their fates were intimately tied to the gods of nature. However, there was no way to know how exactly the gods took their decisions. Sometimes the rituals worked; sometimes they didn’t. A data-centric world may put us in a similar position. An AI algorithm may take important decisions on our behalf based on extensive data. We might remain unaware of the precise reasoning behind a decision but be compelled to accept it on the strength of its perceived objectivity and fairness.
The information age has made our lives more predictable and better. Big Data continues to play a constructive role in improving the quality of life in various domains such as healthcare, transportation, and science and technology. However, we also need to have informed discussions on the implications of a data-centric future.
The author is an IRS officer
Views are personal