As the Ministry of Electronics and Information Technology (MeitY) works on amendments to the IT Rules, 2021 to tackle deepfakes, global software companies including Microsoft, IBM, Adobe, Amazon Web Services (AWS), Zoom and SAP have urged that regulations for B2B enterprise firms and social media platforms be kept distinct.
They argue that business-to-business and enterprise software services providers pose limited risk to user safety and public order, given the size of their user base. Further, unlike social media firms such as X, Facebook or Instagram, they do not provide services directly to consumers.
Writing through their software alliance group BSA to the minister of state for electronics and IT, Rajeev Chandrasekhar, the enterprise firms said that a one-size-fits-all approach should be avoided.
“The MeitY should consider the differences in the role and function of intermediaries when prescribing obligations related to the spread of deepfakes,” Venkatesh Krishnamoorthy, country manager – India at BSA, said in the letter.
“This is crucial due to key service-level, technical, functional, and user-based distinctions that ensure that all intermediaries do not have the same ability to address this issue,” Krishnamoorthy added.
Among alternative solutions to the problem of deepfakes, BSA said the government should encourage the use of watermarks or other disclosure methods for AI-generated content, which can help users tell whether content is real or AI-generated and thereby curb misinformation.
Further, the software alliance said an open-source standard developed by the Coalition for Content Provenance and Authenticity (C2PA) generates tamper-evident content credentials that establish a piece of content’s authenticity and provenance. “This standard will help consumers decide what content is trustworthy and promote transparency around the use of AI,” it said in the letter.
Experts said platforms should preserve these content credentials, watermarks and metadata, so that the public can see them wherever they consume online content.
While pitching for responsible use of artificial intelligence (AI), the government is initially looking at regulations to curb the spread of deepfakes via social media platforms such as X, Facebook and Instagram. Broader regulation at the technology level would be addressed in the upcoming Digital India Act.
Last year, the government issued multiple advisories to the social media companies to take down content related to deepfakes and misinformation from their platforms.
As per the IT Rules, companies are mandated to remove such content within 36 hours of receiving a report from either a user or a government authority. Failure to comply invokes Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC). Non-compliance could also cost online platforms their safe harbour protection under Section 79(1) of the Information Technology Act, 2000.
In December, MeitY had also asked the platforms to send regular reminders to their users not to upload, transmit or host prohibited content. The companies were asked to inform users about such content at the time of first registration, through periodic reminders, at every login, and while uploading or sharing information on the platform.
“For those who find themselves impacted by deepfakes, I strongly encourage you to file First Information Reports (FIRs) at your nearest police station and avail the remedies provided under the Information Technology (IT) rules, 2021,” Chandrasekhar had said in November last year, in a statement.