By Atanu Biswas

All of a sudden, we realised that deepfakes, a new avatar of AI, existed, and that their ever-improving nature could potentially upend society. Several disturbing deepfakes of high-profile Bollywood celebrities surfaced, and doctored videos appeared to be trying to sway voters in some poll-bound states. No wonder Prime Minister Narendra Modi recently referred to deepfakes as the “new age sankat.”

Fake political propaganda, fake news, and fake pornography are nothing new, though. A New York-based professional photographer was accused of pasting pictures of women’s heads onto nude bodies as early as 1888. Mussolini circulated a doctored photograph of himself against the backdrop of World War II. Social media platforms like Facebook, Twitter, and WhatsApp have shaped several elections around the world over the last decade or so, with massive amounts of fake audio, video, and images shared relentlessly.

Deepfakes get to roll the die now. But what is a deepfake? The word is a portmanteau of “deep learning” and “fake.” Author, entrepreneur, and AI expert Nina Schick offered a concise definition in her 2020 book Deepfakes: The Coming Infocalypse: “A deepfake is a type of ‘synthetic media’, meaning media (including images, audio and video) that is either manipulated or wholly generated by AI.”

With the advent of AI and deepfake technology, numerous apps are now widely accessible, and creating deepfakes has become easy and cheap, even for amateurs. Professional assistance is reportedly available at a meagre cost as well. Social media’s lightning speed in spreading such content, along with the deep reach of the internet and cellphones in society, only amplifies the problem. The sankats are intensifying.

A widely reported 2019 study by the Amsterdam-based cybersecurity company Deeptrace found that a shocking 96% of deepfakes were pornographic, and that 99% of those mapped the faces of female celebrities onto porn stars. Other studies estimate that 90-95% of deepfake videos are non-consensual pornography or image-based abuse. Women, and not only celebrities, make up the great majority of victims. Is the biggest sankat from deepfakes, then, the proliferation of fake pornography?

Or is the greatest nightmare the possibility of deepfakes swinging close and important elections, and thereby altering the trajectory of geopolitics? Even as we worried about how deepfakes could affect the assembly elections in Madhya Pradesh or Telangana, the recent elections in Slovakia and Argentina demonstrated how the growing deepfake boom could endanger democracy. As Slovakia saw in September, deepfakes of candidates saying something untoward in the final moments of a close election could change its outcome. During Argentina’s November elections, social media was inundated with AI-generated photos and videos, including deepfakes. AI “is now likely to be a factor in many democratic elections around the globe,” according to the New York Times article “Is Argentina the First AI Election?” by Jack Nicas and Lucía Cholakian, dated November 15.

Elections in the US, the UK, and the EU are scheduled for 2024, as is India’s. Indeed, deepfakes have already crept into the US 2024 presidential contest. For instance, the campaign of Florida Governor Ron DeSantis released a video with AI-generated images of Donald Trump hugging Anthony Fauci, whom Republicans despise for his role during the Covid pandemic.

Deepfakes can destabilise the business sector as well. But as deepfakes blur the boundary between the real and the fake, perhaps the biggest of the sankats is the transformation of the world into a place where we can Trust No One, as author and journalist Michael Grothaus argued in his 2021 book of that name. “Disinformation campaigns pose a significant threat to democratic processes, public health, and market stability,” according to NATO’s 2020 report Deepfake—Primer and Forecast. Trust No One reveals a considerably more sinister aspect of deepfakes.

Some of the book’s chapters, including “The End of History” and “The End of Trust,” make for upsetting reading. In the final chapter, “The End of Life,” Grothaus raises moral questions as he describes watching a short film of his father, who passed away in 1999 but was brought back to life through deepfakery for just a few hundred dollars. “Everything about deepfakes is complex—except for the expertise needed to create them,” Grothaus noted.

In her book, Nina Schick also expressed her belief that deepfakes have the potential to cause the greatest information and communications crisis in world history: in a world of deepfakes, it will soon be impossible to distinguish the real from the fake. She dubbed this misinformation crisis the “Infocalypse.”

What will be the Infocalypse’s biggest fallout? Perhaps it goes beyond faked reality: genuine reality itself becomes plausibly deniable. American law professors Bobby Chesney and Danielle Citron used the term “liar’s dividend” as early as 2018, while outlining the threats deepfakes pose to national security, privacy, and democracy. “Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence,” Chesney and Citron wrote. Nina Schick made a similar point in her book: “bad actors will be able to deny everything and yet attack anyone.”

Thus, we, the common people, find ourselves stuck in a shadowy region, a realm of uncertainty. As the line between real and fake grows ever hazier, will we eventually stop taking anything on social media at face value? That could be the Infocalypse, the greatest danger posed by AI and its deepfake avatar.

(The author is a professor of statistics at the Indian Statistical Institute, Kolkata)