Srinath Sridharan
The danger of deepfakes lies in the ability of AI to manipulate images, audio, and video into highly convincing simulations of people saying or doing things they never did. They stand as a chilling testament to the power and peril of our digital age, and are a growing concern for society at large.
Deepfakes originate from generative adversarial networks (GANs), a technique that pits two neural networks against each other. The first network, the generator, crafts images or videos meant to look as real as possible. The second, the discriminator, tries to distinguish genuine content from what the generator produces. Over time, the generator refines its ability to create fakes that not only deceive the discriminator but also appear convincing to human observers. What was once the realm of experts has now been democratised, allowing virtually anyone with a computer to craft convincing fabrications.
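For the technically curious, the adversarial loop described above can be sketched in a few lines of PyTorch. This is a minimal toy illustration, not a deepfake system: the data are two-dimensional points rather than faces, and the tiny network sizes, learning rate, and training length are arbitrary choices made for the example.

```python
import torch
import torch.nn as nn

# Toy stand-in for "real" data: 2-D points from a shifted Gaussian.
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# Generator: maps random noise to candidate "fakes".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a point is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real from fake.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # freeze G for this step
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))  # G wants D to answer "real"
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# After training, G's samples should cluster near the real distribution.
print(G(torch.randn(5, 8)))
```

Actual deepfake generators swap these toy networks for deep convolutional architectures trained on facial imagery, but the adversarial feedback loop, generator against discriminator, is the same.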
With these technologies developing at breakneck speed, identifying a deepfake will only become harder. Social media platforms, the primary arena for the dissemination of deepfakes, find themselves in a conundrum. The sheer volume of content posted on them is staggering, making deepfake detection an almost Sisyphean task. Algorithms and content moderation teams grapple with sifting through this digital flood, often with limited success.
Women, in particular, have been disproportionately affected by non-consensual deepfake creation, leading to instances of revenge porn, harassment, and privacy invasion. Celebrities face the risk of manipulated content damaging their public image or reputation. Individuals from all walks of life, including corporate leaders, politicians, activists, and ordinary citizens, can fall prey to malicious deepfake campaigns with personal, professional, and even societal consequences. A recent example involving actress Rashmika Mandanna illustrates the depth of this predicament: a deepfake video of her spread rapidly online before it was debunked, but by then the damage was done, and Mandanna had become a victim of this digital deception. Such incidents underscore the need for urgent action against the growing menace of deepfakes.
The government advisory issued after this incident directs social media giants to remove, within 36 hours of receiving a complaint, any content on their platforms that depicts impersonation of any kind or artificially morphed images of people. India will need to build on these lessons to update its laws around emerging technologies, especially AI. The gap lies in the existing legal framework's inability to fully address the complex and evolving challenges posed by AI, including deepfakes. While some provisions of the Indian Penal Code (IPC) and the Information Technology (IT) Act can be used to address specific issues, they may not comprehensively cover the broad range of AI-related concerns, such as ethical dilemmas, data privacy, security, and the misuse of AI technologies.
Regulating deepfakes presents a formidable technological challenge. First and foremost, the rapid evolution of deepfake generation techniques constantly outpaces detection methods. As creators refine their algorithms, distinguishing between real and manipulated content becomes ever harder. Deepfake creators often remain anonymous, making it hard to hold individuals or entities accountable for harmful content. Moreover, the sheer volume of content on social media platforms makes real-time monitoring and regulation a daunting task. The deployment of AI and machine learning for detection is promising, but it requires significant resources and ongoing development to keep pace with the ever-advancing state of deepfake technology.
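To make the detection side concrete, the sketch below frames it the way many research systems do: as a binary classifier scoring individual video frames as real or fake. Everything here, the small network, the random placeholder data, and the labels, is a hypothetical stand-in; production detectors are far larger models trained on curated datasets of genuine and synthetic media.

```python
import torch
import torch.nn as nn

# A deliberately small CNN that scores a video frame as real (0) or fake (1).
# Hypothetical stand-in for the much larger models used in practice.
class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # probability the frame is fake

detector = FrameDetector()

# One training step on a (placeholder) labelled batch of 64x64 RGB frames.
frames = torch.randn(8, 3, 64, 64)            # random tensors, not real video
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = nn.BCELoss()(detector(frames), labels)
loss.backward()
```

The cat-and-mouse dynamic follows directly from this setup: each new generation technique changes the visual artefacts a classifier has learned to rely on, so detectors must be continually retrained on fresh examples, which is precisely the resource burden described above.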
Balancing the preservation of free expression with the prevention of malicious deepfake dissemination further complicates the regulatory landscape. AI-generated content has legitimate applications in fields like entertainment and digital art. Regulating deepfakes risks stifling innovation and creativity in these areas. Thus, a balanced approach that leverages user vigilance, legal standards, and responsible content moderation is essential for addressing the deepfake dilemma on these platforms.
Social media platforms can take down deepfakes, but only when certain conditions are met. Removal of deepfake content hinges on two critical factors: user reports and legal frameworks. Where deepfakes violate local laws or community guidelines, reported instances can be addressed through a structured review process. However, the sheer volume of content uploaded daily makes proactive detection a significant challenge. Without user reports, vetting every video as a potential deepfake would be a logistical nightmare for social media platforms, and many legitimate posts could be unjustly flagged.
The battle against deepfakes is far from over, and it is not a fight that social media platforms can tackle single-handedly. Collaboration between tech companies, governments, and cybersecurity experts is essential to develop robust detection mechanisms and legal frameworks. Education is equally important, as the public must be vigilant consumers of digital content.
As we grapple with this digital dilemma, one thing is clear: deepfakes are not a passing trend. They represent a fundamental shift in the way we perceive reality in the digital age, and the challenges they pose are anything but illusory. In a world where reality can be rewritten with the stroke of an algorithm, the value of truth becomes the currency of trust.
(The author is a corporate advisor and policy researcher)