AI advisory: Govt removes requirement to take its permission for untested models

The platforms, however, have been advised to label their AI models and software as ‘under testing’ before making them available to the public. They have also been advised to follow a user consent mechanism, informing users about the possible erroneous outputs that generative AI models can produce.


In a fresh advisory to platforms on Friday, the Ministry of Electronics and IT (MeitY) removed the earlier requirement of seeking the government’s nod before launching any untested or unreliable AI models in the country.


On March 1, MeitY issued an advisory to all intermediaries using artificial intelligence (AI) models, software or algorithms and asked them to seek permission from the government and label their platforms as ‘under testing’ before making them available to the public.

Besides removing the requirement to seek the government’s nod before launching models, the revised advisory makes few substantive changes; mainly, its language has been toned down. A copy of the revised advisory was seen by FE.

The revised advisory comes after many experts and AI companies criticised the earlier one. Startups expressed their displeasure with this kind of screening of large language models, terming the move regressive and one that would throttle innovation. The original advisory had been issued after experimental models on generative AI platforms produced several instances of biased content and misinformation in recent days.

The intermediaries have also been asked to find a way to embed metadata or a unique identification code in everything that is synthetically created on their platforms. Through this, the government aims to identify the originators of misinformation and deepfakes.

“Further, in case any changes are made by a user, the metadata should be so configured to enable identification of such user or computer resource that has effected such change,” the revised advisory said.

Earlier, the companies were asked to submit an action-taken report within 15 days, but the revised advisory has removed that requirement; the platforms have instead been asked to comply “with immediate effect”.

“Whether an AI model has been tested or not, proper training has happened or not, is important to ensure for the safety of citizens and democracy. That’s why the advisory has been brought,” communications and IT minister Ashwini Vaishnaw had said earlier this month.

“Some people came and said sorry we didn’t test the model enough. That is not right. Social media platforms have to take responsibility for what they are doing,” the minister added.

Rajeev Chandrasekhar, minister of state for electronics and IT, had also clarified that the advisory would only be applicable to large platforms and not to startups. “The process of seeking permission, labelling and consent-based disclosure to users about untested platforms is an insurance policy to platforms who can otherwise be sued by consumers,” Chandrasekhar had said.

In the advisory, the government reiterated that non-compliance with the provisions of the IT Act and/or IT rules would result in consequences including, but not limited to, prosecution under the IT Act 2000 and other criminal laws, for intermediaries, platforms and their users.


This article was first uploaded on March 16, 2024, at 1:07 am.