Following the Indian government’s warning on Friday over a biased response from Google’s generative AI tool Gemini, the company said on Saturday that Gemini may not be reliable for some prompts related to current events, political topics, or evolving news.
However, Minister of State for Electronics and IT Rajeev Chandrasekhar posted on X that “sorry unreliable” does not exempt the platform from the law. “Our Digital Nagriks are NOT to be experimented on with ‘unreliable’ platforms/algos/model,” Chandrasekhar posted, adding, “Safety & Trust is platforms legal obligation”.
The government is also expected to issue a show-cause notice to Google over the matter, officials said.
Google’s explanation on Saturday stated: “We’ve worked quickly to address this issue. Gemini is built as a creativity and productivity tool and may not always be reliable… This is something that we’re constantly working on improving.”
On Friday, the government had issued a warning to Google, the second in the past four months, over the bias shown by Gemini. The warning stated that such instances of bias in content generated through algorithms, search engines or AI models on platforms violate Rule 3(1)(b) of the IT Rules and several provisions of the criminal code. On this basis, the platforms are also not entitled to protection under the safe harbour clause of Section 79 of the IT Act.
The recent case pertains to Gemini’s responses to prompts asking whether Prime Minister Narendra Modi, Donald Trump and Ukraine’s Volodymyr Zelenskyy were fascists. As per screenshots shared by a user on X, Gemini’s response tilted towards describing Modi as a fascist, whereas for Trump it gave no comparable answer, saying instead that “elections are a complex topic with fast changing information”. For Zelenskyy, it gave a limited answer.
Google’s statement added that Gemini is built in line with its AI principles and has safeguards to anticipate and test for a wide range of safety risks. The company said it prioritises identifying and preventing harmful or policy-violating responses from Gemini, and that it also offers users ways to verify information through its double-check feature, which evaluates whether there is content on the web to substantiate Gemini’s responses.
The government has also recently advised platforms, especially generative AI platforms such as OpenAI and Google Gemini, not to publicly release experimental models merely by attaching a disclaimer.