Google’s generative AI platform Gemini has become a source of embarrassment for the tech major for generating biased responses to questions relating to history, politics, gender, and race. The Indian government saw red over a response suggesting that the Prime Minister is a fascist, and with valid reason: when the same question was asked about Ukrainian President Volodymyr Zelensky and Donald Trump, the answers were far more diplomatic. Sample this: “Whether Ukrainian President Volodymyr Zelensky is a fascist is a complex and highly contested question, with no simple answer. It’s crucial to approach this topic with nuance and consider various perspectives”. In the case of Trump, the response was: “Elections are a complex topic with fast-changing information. To make sure you have the latest and most accurate information, try Google Search”.

The government was prompt in warning Google that such algorithmic bias would not be tolerated. It followed this up with an advisory to all platforms that AI models need the government’s approval before being deployed for public use. This was not the first time Google had faced the wrath of government authorities. In November 2023, it had run into controversy when a user flagged a screenshot showing that the platform had refused to summarise an article by a right-wing online media outlet on the grounds that it spread false information and was biased.

Globally, Gemini drew criticism after failing to accurately represent white Europeans and Americans in specific historical contexts. For instance, when users asked it to produce images of a German soldier from 1943, it generated images of non-white, ethnically diverse soldiers, which was not an accurate representation.

Facing such backlash, Google offered an apology and is working on rectifying the model. Chances are that it will be able to do so, but whether such chatbots will cease generating controversies is highly debatable. AI models have nothing to do with intelligence; they are simply the outcome of training on large data sets. Look at it this way: a child in its crawling stage runs the risk of falling off the bed, so it is trained to stop at a particular point. It imbibes this, and the chances of a fall leading to injury are minimised. It is not that the child was dumb earlier; it has simply been trained that way. By the same token, it is clear that the developers of Gemini had not trained the model well enough.
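To make the point concrete, here is a deliberately simplified, hypothetical sketch (it bears no relation to Gemini’s actual architecture): a toy “model” that merely parrots the majority label in its training data. If the data set is skewed, its answers are skewed, whatever the question.

```python
# A toy, hypothetical illustration: a "model" that simply parrots the
# majority label in its training data. Skewed data in, skewed answers out.
from collections import Counter

def train(examples):
    """Return a predictor that always outputs the most frequent label seen."""
    majority_label, _count = Counter(label for _, label in examples).most_common(1)[0]
    return lambda _prompt: majority_label

# A deliberately skewed, made-up training set.
skewed_data = [("leader A", "fascist")] * 8 + [("leader A", "democrat")] * 2
model = train(skewed_data)

# The "answer" reflects the skew of the data, not any reasoning.
print(model("Is leader A a fascist or a democrat?"))  # -> fascist
```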

However, let’s leave that aside for a while and visualise a scenario where the models have been perfectly trained. In that case, if a question is asked about whether a particular leader is a fascist or a democrat, the answer would not be in pure black and white. However, the human mind is intelligent and may pose the question differently. For instance, what if questions are framed to elicit scenarios in which leaders may be seen as dictators, fascists, or democrats? In such cases, the answers would not be person-specific but could still be used to target particular leaders, depending on the ideological and political predilections of users. Simply put, political controversies will never die and politics will continue to be fractious, as it has always been in the real world.

The basic problem is over-reliance on AI models to generate responses that are assumed to be factually correct, without understanding that objective fact is a mirage. Historian EH Carr was prescient when he wrote that facts “are really not at all like fish on the fishmonger’s slab. They are like fish swimming about in a vast and sometimes inaccessible ocean; and what the historian catches will depend, partly on chance, but mainly on what part of the ocean he chooses to fish in and what tackle he chooses to use—these two factors being, of course, determined by the kind of fish he wants to catch. By and large, the historian will get the kind of facts he wants.”

What Carr meant to illustrate was that historical facts are never objective. The most effective way to influence public opinion is by the selection and arrangement of the appropriate facts. “It used to be said that facts speak for themselves. This is, of course, untrue. The facts speak only when the historian calls on them: it is he who decides to which facts to give the floor, and in what order or context,” Carr had rightly concluded.

A person committed to communist ideology would never be the best analyst to offer a criticism of the regimes which once ruled the USSR. Similarly, a devout Catholic cannot be relied on to investigate the Holy Inquisition.

When it comes to written works on subjects relating to politics, history, gender, or race, people examine the background of the writer before delving into the work, which helps them identify possible biases. This safeguard disappears when they rely on generative AI models, from which users expect 100% factual answers, forgetting that the responses will vary depending on the data sets the models have been trained on.

While misleading and harmful content on any platform needs to be checked, governments worldwide should not fret too much over regulating historical and political content, which will continue to be subjective in nature. The better alternative is for generative AI models to shun such queries, as they already do with expletives and abuse. For users, the lesson is to treat these models like books and periodicals, where there is always a choice between the good, the bad, and the ugly.
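As a purely illustrative sketch of what “shunning” such queries could look like in practice, the snippet below screens prompts against a small list of hypothetical trigger patterns and returns a neutral deflection instead of calling the underlying model. The patterns, the wording, and the generate hook are assumptions for illustration, not Gemini’s actual safeguards.

```python
# Illustrative only: a pre-generation filter that deflects politically
# charged "is X a fascist / dictator?" prompts, much as chatbots already
# filter expletives. Patterns and wording here are hypothetical.
import re

# Hypothetical trigger patterns for queries the model should decline.
SENSITIVE_PATTERNS = [
    r"\bis\s+.+\b(a\s+)?(fascist|dictator|authoritarian)\b",
    r"\bwhich\s+(leader|politician)s?\b.*\b(fascist|dictator)s?\b",
]

DEFLECTION = (
    "Whether a political figure fits a contested label is a subjective "
    "question. Consider consulting a range of reputable sources."
)

def answer(prompt, generate):
    """Return a neutral deflection for sensitive political prompts;
    otherwise fall through to the underlying model's generate() call."""
    lowered = prompt.lower()
    if any(re.search(pattern, lowered) for pattern in SENSITIVE_PATTERNS):
        return DEFLECTION
    return generate(prompt)

if __name__ == "__main__":
    def echo_model(prompt):
        # Stand-in for a real model call.
        return f"[model response to: {prompt}]"

    print(answer("Is leader X a fascist?", echo_model))  # deflected
    print(answer("Summarise the history of the printing press.", echo_model))
```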
