The Ministry of Electronics and Information Technology (MeitY) on Friday issued a warning to Google over a biased response about Prime Minister Narendra Modi generated by its AI platform Gemini.
This is the second warning issued by the government to Google in the last four months. The warning stated that such instances of bias in content generated through platforms' algorithms, search engines or AI models violate Rule 3(1)(b) of the IT Rules and several provisions of the criminal code. On this basis, the platforms will not be entitled to protection under the safe harbour clause of Section 79 of the IT Act.
The recent case pertains to Gemini’s responses to different prompts on whether Modi, Donald Trump and Ukraine’s Volodymyr Zelenskyy were fascists. As per screenshots shared by a user on X, Gemini’s response was tilted towards Modi being a fascist, whereas for Trump it gave no related response but said “elections are a complex topic with fast-changing information”. For Zelenskyy, it gave a limited answer, as per the screenshot.
“These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code,” Minister of State for Electronics and IT Rajeev Chandrasekhar said in response to a complaint by the user on X.
Chandrasekhar tagged MeitY and Google for further action. The government is also expected to issue a show-cause notice to Google over the matter, officials said.
When FE ran a similar query on Gemini, both for Modi and Zelenskyy, the platform, in one of its draft responses, gave arguments both for and against. For Trump, it did not give any response related to fascism, but offered a general statement on elections.
In November, Google’s Bard (now Gemini) caught the attention of the government when a user flagged a screenshot in which Bard refused to summarise an article by a right-wing online media outlet on the grounds that it spread false information and was biased.
Recently, the government also advised platforms, especially generative AI platforms such as OpenAI and Google Gemini, not to release experimental variants to the public merely by attaching a disclaimer.
Platforms like ChatGPT and Gemini currently carry a disclaimer stating that their generative AI models can display inaccurate information, including about people, and that users should double-check their responses.
Officials have said that instead of releasing experimental products to the public with disclaimers, these platforms should first run experiments on a specific set of users in a sandboxed environment approved by a government agency or regulator.
MeitY has been working on an omnibus Digital India Act to address such emerging issues, but has said that in the interim the Information Technology Act and other similar laws will apply in all cases of user harm, including deepfakes.
The government is also soon expected to amend the IT Rules; likely additions include watermarking and labelling of details such as the source and creator of information generated by generative AI platforms.
On Thursday, Google also announced that it is pausing Gemini’s AI image generation feature after complaints regarding “inaccuracies” in historical pictures generated by the model.
“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” Google said in a statement.
With regard to AI regulations, companies like Google favour a risk-based approach tied to the use case of the technology, instead of uniform rules for all AI applications. “I think, fundamentally, you have to ask yourself, what kind of bias you are concerned about? There are already laws in place that say certain types of biases are not allowed. So, that is why we are pushing for a risk-based approach, proportionate to a particular use case,” Pandu Nayak, vice president of Search at Google, had told FE in an interaction in December.