While celebrating artificial intelligence (AI) and the immense scope it offers at the interface between technology and consumers, the government, social media firms and citizens should not lose sight of the harms it can bring if regulatory mechanisms are not put in place in a time-bound manner. The surfacing of a deepfake video of two Bollywood actors on social media platforms has set alarm bells ringing. Such videos, termed deepfakes, are worrisome because they impersonate the voice and face of a real person so convincingly that it is very difficult to spot that they are fake. At a time when almost every individual has a profile on social media sites, creating deepfakes can be as easy as surfing the Internet.
Though the government did its bit by issuing advisories to social media firms that any misleading, fake or harmful content should be removed by the intermediaries within 36 hours of being flagged, this is more a reactive than a preventive solution. It is true that a law governing AI and deepfakes cannot emerge overnight. So, in the interim, social media firms need to be more vigilant and proactive in taking down such content once it has been flagged. The platforms have been found wanting in this regard, perhaps because of grey areas in the interpretation of the information technology laws that govern intermediaries. The general understanding is that platforms are supposed to remove objectionable content within 24-36 hours of its being flagged either by an aggrieved party or by the government. In the current case, no such complaint was formally lodged by the actresses. However, several notable personalities pointed out that the video was fake, and their posts were duly acknowledged by one of the actors. This should have been grounds enough for the platforms to remove the video immediately.
Laxity on the part of the platforms perhaps prompted the government to remind them that they are required to act even in the absence of a formal complaint once the matter has come to light. It did the right thing by warning that any failure on their part may cost them their safe harbour protection under Section 79(1) of the Information Technology Act, 2000. Still, a larger responsibility falls on the government to put in place specific AI regulatory mechanisms to check such incidents and the misuse of technology. Minister of State for Electronics and IT Rajeev Chandrasekhar has rightly been pointing out the need for an omnibus Digital India Act to replace the 23-year-old IT Act, which was framed when the concept of an intermediary did not even exist. The government had to insert a set of new rules as late as 2021 to regulate such entities.
However, the Digital India Bill should be drafted fast, or else it will be a case of too little, too late. Further, the rules under the Digital Personal Data Protection law should also be put in place soon; a law without the necessary rules in place is as good as no law. Another area that needs clarification is whether the DPDP law also covers AI. It is understood that scraping the Internet to harvest the personal details of an individual would also be an offence under the law. The government needs to clarify this and act fast.