Take down deep fakes: Govt to social media firms

Says such content should be removed even in absence of formal complaint

The government on Tuesday made it clear to social media firms such as X, Facebook and Instagram that, pending a new Digital India Act, they should take down deep fakes and similar misinformation from their platforms even in the absence of a formal complaint, under the provisions of the Information Technology Act.

It also warned the firms concerned that failure to act under these provisions will attract punishment under Section 66D of the IT Act, 2000, which covers cheating by personation using computer resources and carries imprisonment of up to three years and a fine of up to Rs 1 lakh.

“Overnight a law cannot come up to deal with emerging technologies. Our existing laws provide enough protection to deal with problems like deep fakes,” a government official said.

The advisory, and its emphasis on provisions that mandate action even without a formal complaint from the aggrieved party, was prompted by a deep fake video of actress Rashmika Mandanna that has surfaced on social media platforms. Though the actor has not lodged a formal complaint with the authorities, several notable personalities have pointed out that the video is an impersonation, and the actor has acknowledged as much on the platforms concerned.

MeitY has pointed out that under the intermediary rules framed under the IT Act, social media platforms must remove misleading content within 36 hours of receiving a report from either a user or a government authority. Failure to comply invokes Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC). Non-compliance could also cost online platforms their safe harbour protection under Section 79(1) of the Information Technology Act, 2000.

“Deep fakes are a major violation and harm women in particular. Our government takes the responsibility of safety and trust of all nagriks (citizens) very, very seriously, and more so about our children and women who are targeted by such content,” said Rajeev Chandrasekhar, minister of state for electronics and IT.

“For those who find themselves impacted by deep fakes, I strongly encourage you to file First Information Reports (FIRs) at your nearest police station and avail the remedies provided under the Information Technology (IT) Rules, 2021,” the minister added.

Currently, India does not have a specific regulatory framework for AI. The government is working on a Digital India Bill to address the challenges posed by emerging technologies through the prism of user harm.

While acknowledging that the provisions of the IT Act can be used to deal with deep fakes and other AI-related crimes, experts have called for a collective approach from the government and industry to tackle such issues.

“I don’t think calling for an AI law immediately is the answer to the challenges posed by emerging technologies. The focus should be on having responsible AI principles and on training AI algorithms that can help identify suspicious patterns, including voice and design in images or videos like deep fakes, as well as fabricated content,” said Jameela Sahiba, senior programme manager – emerging tech at The Dialogue.

According to Sahiba, besides building awareness around flagging misinformation and impersonated content, social media platforms need more intelligent crawling tools.

According to industry executives, the AI-based tools now used to create deep fakes are so advanced and sophisticated that it is difficult for the ordinary human eye to distinguish deep fakes from original images.

“We are still experiencing primitive versions of AI tools, and their full potential is yet to unfold. There is certainly a need for regulations governing the use of AI, not only from the standpoint of mitigating user harm and ensuring safety on the Internet, but also to lay a foundation for its overall commercial use across territories,” said Tanu Banerjee, partner at Khaitan & Co.

Compliance reports filed by social media platforms under the IT Rules, 2021, suggest they are increasingly removing content proactively on their own. “The massive scale of content generation makes it difficult for platforms to track every piece of illegal content harming users on their platforms. What they can do is invest more in AI research and development and in science and technology to tackle misinformation and deep fakes,” said Shruti Shreya, senior programme manager – online safety and platform regulation at The Dialogue.

Since AI is still an emerging technology, countries globally are yet to catch up with its rapid pace of development and adoption. In some countries, such as South Korea, deep fakes are already illegal. The EU’s Artificial Intelligence Act has also received a mixed response.

“While AI tools can be misused for potentially harmful activities such as creating deep fakes and propagating misinformation, among others, AI in general is a force for good and has the potential to bring about a net positive effect on society. It is to be seen how the government intends to achieve this balance through the Digital India Act, which is on the horizon,” said Namita Viswanath, partner at IndusLaw.

This article was first uploaded on November 8, 2023, at 6:15 am.