Government nod for AI models needed only for social media platforms

Not applicable to areas like healthcare, agriculture, says IT minister

The government’s advisory asking intermediaries to seek its permission before launching AI models will apply only to social media platforms and not to platforms working in sectors such as healthcare or agriculture, Communications and IT Minister Ashwini Vaishnaw said on Monday.

The minister’s comments came after startups expressed displeasure with such screening of large language models, terming the move regressive and one that would throttle innovation.

However, Vaishnaw defended the move, stating that proper training of such models was important to ensure the safety of citizens and of democracy.

“Whether an AI model has been tested or not, proper training has happened or not, is important to ensure for the safety of citizens and democracy. That’s why the advisory has been brought,” he said. “Some people came and said sorry we didn’t test the model enough. That is not right. Social media platforms have to take responsibility of what they are doing,” the minister added.

Vaishnaw also clarified that the government’s missive was not a regulatory framework but an advisory asking platforms to test their models before launching them.

Earlier in the day, Minister of State for Electronics and IT Rajeev Chandrasekhar also clarified that the advisory will apply only to large platforms and not to startups. “Process of seeking permission, labelling and consent based disclosure to users about untested platforms is an insurance policy to platforms who can otherwise be sued by consumers,” Chandrasekhar said.

The clarifications came after startup founders expressed grave concerns over the move.

Aravind Srinivas, CEO of Perplexity AI, termed the advisory a “bad move by India”.

Similarly, Pratik Desai, founder of KissanAI, which has built Dhenu, a large language model (LLM) for agriculture, said: “I was such a fool thinking I will work bringing GenAI to Indian agriculture from San Francisco. We were training a multi-modal low cost pest and disease model, and was so excited about it. This is terrible and demotivating after working four years full time bringing AI to this domain in India.”

Bindu Reddy, CEO of Abacus AI, said: “Every company deploying a GenAI model now requires approval from the Indian government! That is, you now need approval for merely deploying a 7b open source model. If you know the Indian government, you know this will be a huge drag! All forms will need to be completed in triplicate and there will be a dozen hoops to jump through! This is how monopolies thrive, countries decay and consumers suffer.”

As reported earlier, the government on Saturday issued an advisory to all intermediaries and generative AI platforms that use artificial intelligence models, software or algorithms, asking them to seek its permission and to label their platforms as ‘under testing’ before making them available to the public.

After getting the necessary approvals and labelling, the platforms — which include the likes of Google’s Gemini, ChatGPT and Krutrim AI — will then have to seek user consent, clearly stating that the generative AI model or search platform they are using could give incorrect information and be error-prone.

The advisory was issued after several instances of biased content and misinformation generated by experimental models on generative AI platforms were reported in recent days.

The companies, especially digital publishing platforms, have also been asked to figure out a way to embed metadata or a unique identification code into everything that is synthetically created on their platforms. This will help trace the originator of such information.

After the platforms submit their applications to the Ministry of Electronics and IT (MeitY), officials may ask for a demo of the model, conduct any necessary tests and evaluate the consent-seeking mechanism.

In the advisory, the government reiterated that non-compliance with the provisions of the IT Act and/or IT Rules would result in potential penal consequences for the intermediaries, platforms or their users, when identified, including but not limited to prosecution under the IT Act and several other statutes of the criminal code.

Such instances of bias or misinformation in content generated through algorithms, search engines or AI models on platforms violate Rule 3(1)(b) of the IT Rules and several provisions of the criminal code. On that basis, the platforms are also not entitled to protection under the safe harbour clause of Section 79 of the IT Act.

This article was first uploaded on March 5, 2024, at 9:30 am.
