Govt nod, user consent must for GenAI models



The government on Saturday issued an advisory to all intermediaries and generative AI platforms using artificial intelligence (AI) models, software or algorithms to seek permission from the government and label their platforms as “under testing” before making them available to the public.

The platforms — including Google’s Gemini, ChatGPT and Krutrim AI — will then have to seek user consent, clearly stating that the GenAI model or platform could give incorrect information and be error-prone.

“In a lot of ways, this advisory signals the framework of the future of our regulatory and legislative framework that aims at creating a safe and trusted internet,” Rajeev Chandrasekhar, minister of state for electronics and IT, said.

The advisory follows the use of experimental models by generative AI platforms, which have produced several reported instances of biased content and misinformation.

The companies, especially digital publishing platforms, have also been asked to figure out a way to embed metadata or unique identification code for everything that is synthetically created on their platforms. This will help track the originator of such information.

“Nobody can escape accountability from what is unlawful,” Chandrasekhar said, adding that the services of these platforms should not generate responses that are illegal under Indian laws and threaten the integrity of the electoral process.

“All intermediaries or platforms ensure that their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process, including via the use of artificial intelligence models/LLM/generative AI, software or algorithms,” the advisory said.

The companies have been asked to submit an action-taken report within 15 days.

After the platforms submit their application to the ministry of electronics and IT (MeitY), officials might ask for a demo of the model, conduct any necessary tests and evaluate the consent-seeking mechanism, among other measures.

Chandrasekhar said that “sorry, unreliable”-style responses are not acceptable and that user consent needs to be sought with proper disclaimers, which will create a sandbox-like environment for these platforms.

“The most important point is that the liability today exists for them (intermediaries). This (the compliance with advisory) is only making it easier for them to make sure that the liability can be in a sense carved out because they have disclosed it and it is with consent of the person that they are offering this unlawful content so that they have a real legitimate defence,” Chandrasekhar added.

In the advisory, the government reiterated that non-compliance with the provisions of the IT Act and/or IT Rules would result in potential penal consequences for the intermediaries, platforms or their users, when identified, including but not limited to prosecution under the IT Act and several other statutes of the criminal code.

Such instances of bias or misinformation in the content generated through algorithms, search engines or AI models on platforms violate Rule 3 (1) (b) of the IT Rules and several provisions of the criminal code. On this basis, the platforms are also not entitled to protection under the safe harbour clause of Section 79 of the IT Act.

This article was first uploaded on March 2, 2024, at 5:53 pm.
