Ahead of polls, experts urge EC to check GenAI misuse

In a joint letter, the policy advocacy groups have urged the EC to direct intermediaries and platforms such as X, Google, Meta, Snap, and OpenAI, among others, to put internal processes in place to identify and implement reasonable, proportionate, and effective mitigation measures.


Ahead of the general elections, policy advocacy groups such as the Internet Freedom Foundation (IFF), the Software Freedom Law Center (SFLC.in), and Access Now, along with independent experts, have urged the Election Commission (EC) and social media intermediaries to take strict measures to curb the misuse of generative AI and manipulated media.

This is because deepfakes, manipulated media content, and misinformation have the potential to influence voters during elections. In the joint letter, the groups have urged the EC to direct intermediaries and platforms such as X, Google, Meta, Snap, and OpenAI, among others, to put internal processes in place to identify and implement reasonable, proportionate, and effective mitigation measures.

“Intermediaries and platforms must set up an internal task force for election-specific risk mitigation measures. The team should cover areas relating to cybersecurity, threat disruption, content moderation and disinformation,” the groups said in letters to the EC and the platforms.

Besides rolling out fact-checking labels and an internal task force, the policy groups have asked platforms to implement measures such as media literacy initiatives for users, access to reliable information sources and details about the electoral process, and clear labelling that distinguishes AI-generated content from other content.

“Such measures are essential to ensure that elections in India remain free from manipulation, thereby upholding the democratic principles that are the cornerstone of our nation,” the groups said.

Currently, as per the IT Rules, intermediaries are required to take all reasonable measures to remove or disable access to content such as misinformation, impersonation, sexual abuse material, and deepfakes, among other illegal content hosted, published, or transmitted on their platforms, within 24 hours of receiving a complaint. Non-compliance with the provisions of the IT Act and/or IT Rules exposes intermediaries, platforms, or their users, when identified, to potential penal consequences, including but not limited to prosecution under the IT Act and several other criminal statutes.

In the event of non-action, the platforms would also forfeit protection under the safe harbour clause of Section 79 of the IT Act.

“In the face of this growing concern, we call upon X, Meta, Google, Snap, and OpenAI to take immediate and decisive action to ensure that their policies and practices robustly counter the menace of deepfakes, generative AI, and manipulated media content,” the groups said.

Recently, the platforms have announced measures to curb the spread of misinformation during the election season. Earlier this month, Google said its generative AI platform Gemini would have restrictions on responses to some types of election-related queries.

Under its ads policies, Google said it prohibits the use of deepfakes or doctored content that can lead to user harm. “As more advertisers leverage the power and opportunity of AI, we want to make sure we continue to provide people with greater transparency and the information they need to make informed decisions,” Google said.

Similarly, YouTube now requires creators to disclose when they have uploaded AI-generated, altered, or synthetic content.

Meta said that, over the years, it has rolled out industry-leading transparency tools for ads about social issues, elections, or politics; developed comprehensive policies to prevent election interference and voter fraud; and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation.

“We joined forces with the Misinformation Combat Alliance (MCA) to introduce a WhatsApp helpline to deal with AI-generated misinformation, especially deep fakes, providing a platform for reporting and verifying suspicious media,” Meta said in a recent blog.


This article was first uploaded on March 30, 2024, at 1:15 am.