By Poulomi Chatterjee
The government has directed X to remove all Grok AI-generated objectionable pictures of women prompted by users. But X’s identity as a social media platform that has morphed into an AI platform can make regulation unwieldy, requiring a relook at the safeguards, explains Poulomi Chatterjee
What led to the issue with Grok?
Grok, the AI chatbot from Elon Musk’s AI company xAI, rolled out a feature on December 29 that allowed users to edit any image without the subject’s permission. As users thronged to generate non-consensual, highly sexualised images of women, celebrities and children, a Reuters survey found that over a 10-minute period on January 2, X users registered 102 attempts to use Grok to edit photos of people, most of them young women. Grok’s only apparent guardrail was to restrict users from completely “nudifying” people.
How did the govt respond?
The same day, the ministry of electronics and information technology (MeitY) issued an advisory to all digital intermediaries, including social media platforms, “prohibiting hosting, uploading, sharing or transmission of obscene, pornographic, vulgar, indecent, sexually explicit, or pedophilic content, as well as material harmful to children or otherwise unlawful under prevailing statutes.”
As the issue snowballed, it sent a notice to X on January 2 asking it to remove the derogatory images. The platform was prima facie found guilty of non-adherence to provisions of the Information Technology Act, 2000 and the IT Rules, 2021. MeitY sought an Action Taken Report and gave X 72 hours to take down the objectionable content. If X failed to comply with these directions, it would lose its “safe harbour” status, the ministry warned.
What is the safe harbour clause?
Section 79 of the IT Act protects intermediaries, including social media platforms, by exempting them from liability for third-party content posted on their platforms. To claim safe harbour, an intermediary is also expected to follow due diligence and other guidelines prescribed by the government. This protection lapses if the intermediary has knowledge that information or data connected to its computer resource is being used to commit an unlawful act and, despite being notified, fails to remove or disable access to it.
Is this clause being misused?
IT minister Ashwini Vaishnaw has repeatedly pulled up online platforms that take refuge under the safe harbour principle, linking it to the rise in fake news. Last November, Vaishnaw underscored that Indian society required different metrics and that there was a need to be more extractive. “Shouldn’t there be more responsibility on the platform?” he asked.
Industry experts believe the default exemption for intermediaries will likely give way to case-by-case protection once the overarching proposed Digital India Act (DIA) is passed. The DIA is eventually expected to replace the current IT Act; however, there is no set date for that. For now, the Digital Personal Data Protection Act governs AI issues, but publicly available personal data falls outside several of its consent requirements, allowing such data to be processed.
Meanwhile, responding to the backlash, X owner Elon Musk has said that anyone using Grok to make “illegal content will suffer the same consequences as if they upload illegal content,” placing the onus entirely on users. The 72-hour deadline has ended, and it now remains to be seen what X does to prevent a recurrence.
So who is liable?
Musk has argued that Grok is merely the “pen and not the person holding it,” and therefore cannot be punished on users’ behalf. Policy experts have agreed with the analogy while cautioning platforms like X to act responsibly and to ensure necessary safeguards against the generation and distribution of inappropriate content are in place.
Ultimately, how AI tools are used depends on the users. However, if platforms are unable to stop chatbots from producing illegal content on demand, the government may require feature suspensions or third-party safety audits before AI tools are allowed to operate publicly. It also raises the question of whether image-generation tools on social media platforms can ever be fully safe from misuse. Legal recourse must also be available to victims of deepfakes.
While X as a platform may be hosting obscene content, it is distinct from xAI, Musk’s AI company that created Grok, which enables the editing of images based on user prompts, said Kazim Rizvi, founding director of The Dialogue, a public policy think-tank. The AI platform does not fall under the safe harbour regime because it does not host user content, and the tag therefore cannot be revoked from it.
France, Malaysia launch probes
Meanwhile, France and Malaysia have initiated probes into Grok. Politico reported that French authorities will investigate the proliferation of sexually explicit deepfakes on X. Under the EU’s Digital Services Act, violations can attract fines of up to 6% of global turnover. Malaysia’s Communications and Multimedia Commission has also launched a probe into what it calls “serious concern” over AI-generated indecent content involving women and minors, reported online platform Techloy.
