As technology continues to advance, its dark side is also being revealed. With technologies such as generative AI (artificial intelligence) and predictive AI, among others, experts believe technology might take a wrong turn if not regulated. Deepfakes continue to progress, with generative AI making them easier than ever to produce: with a few prompts or a few edits, scammers can create a deepfake. Face-swapping software such as Icons8 FaceSwapper, Deepswap, Faceswapper.ai and Pixble are some of the most common examples. The most basic versions crudely paste one face on top of another to create a ‘cheap fake’. “Deepfakes pose significant cybersecurity challenges for businesses. Executives should be concerned as these artificial intelligence (AI)-generated impersonations can be used for fraudulent activities, misinformation, or even to manipulate decision-making processes within an organisation,” Kumar Ritesh, founder and CEO, Cyfirma, told FE-TransformX.
Reports on ‘Cheapfake’ use cases
Deepfake fraud attempts increased 31-fold in 2023, roughly a 3,000% rise year-on-year, as per insights from a new report by Onfido, a London-based ID verification unicorn. The report further suggested that easy or less sophisticated fraud accounted for 80.3% of all attacks in 2023, about 7.4% higher than the previous year. One common use case involves deepfakes of friends and relatives, used to impersonate them and convey an urgent need for money. The request comes through a voice call (with an AI-powered voice), followed by a demand to transfer the money immediately.
Furthermore, experts opine that deepfakes are being used more frequently in cybercrime. A 2022 survey found that 57% of global consumers claimed they could detect a deepfake video, whilst 43% said they would not be able to tell the difference between a deepfake video and a real video, as per insights from Statista. Similarly, deepfakes of immediate bosses or leaders at the workplace can be used to carry out comparable scams. Deepfakes are also believed to be capable of matching the voice tone as well as the accent of the individual they impersonate.
Another use case is using deepfakes to create sensational fake news, which garners more attention than its subsequent debunking, leaving individuals with lingering doubts. One example of deepfakes in influencer marketing is a malaria awareness video campaign featuring David Beckham apparently speaking nine languages in nine voices (not his own). Beckham’s face was superimposed on the other speakers’ faces to personalise the appeal, while the campaign stayed transparent about its use of generative adversarial networks (GANs).
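For readers unfamiliar with how GANs power such face synthesis, the adversarial idea can be sketched with a toy example. The sketch below is purely illustrative (not deepfake code): a one-parameter linear “generator” learns to mimic a simple real data distribution while a logistic “discriminator” learns to tell real samples from fake ones. All parameter names and values here are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

# Toy GAN on 1-D data: the "real" distribution is N(4, 1).
# Generator: G(z) = a*z + b, with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), interpreted as P(x is real).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.1, 0.0      # discriminator parameters
lr = 0.01

for step in range(5000):
    x_real = random.gauss(4.0, 1.0)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator update: minimise -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = (d_real - 1.0) * x_real + d_fake * x_fake
    grad_c = (d_real - 1.0) + d_fake
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: minimise -log D(fake) (non-saturating loss),
    # i.e. push the discriminator toward calling fakes "real".
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * (-(1.0 - d_fake) * w * z)
    b -= lr * (-(1.0 - d_fake) * w)

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean_fake = sum(samples) / len(samples)
print(round(mean_fake, 2))  # the generated mean typically drifts toward the real mean
```

Real deepfake systems replace these scalar players with deep convolutional networks over pixels, but the adversarial training loop — forger versus detector — is the same, which is also why detection is an arms race.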
Experts believe misinformation remains a big problem in India. With deepfakes’ capability of generating content easily, there are concerns that they may be used to create content that could incite anger among different communities. “These misguiding videos usually show a controversial statement which was never said by the individual in question. Similarly, there have been deepfakes of Bollywood celebrities such as Katrina Kaif and Rashmika Mandanna. There are also deepfakes of your friends and relatives, used to impersonate them or convey a dire need for money. The money is requested through a voice call (which has an AI-powered voice) and there’s an immediate action required to transfer the money,” Pankit Desai, co-founder and CEO, Sequretek, said.
The loopholes
Industry experts believe that the evolving nature of deepfakes compounds the challenges for automated detection systems, making detection increasingly difficult, particularly in the face of contextual complexities. Deepfakes can also fuel abuses such as slut-shaming and revenge porn, which can seriously damage individuals’ reputations and self-image. These sensitive challenges demand comprehensive legal frameworks to address evolving threats and safeguard individuals.
And the struggle continues…
Reportedly, within the next 10 days, the government will come up with clear actionable items on four pillars: detection of deepfakes and misinformation, prevention of the spread of misinformation, stronger reporting mechanisms (including in-app reporting), and increased awareness. “The Ministry of Electronics and Information Technology (MeitY) has identified ‘detection, prevention, reporting, and awareness’ as the four-pronged approach to curbing deepfakes. Any regulation for deepfakes will have to necessarily ensure that it discourages dissemination, incentivises early reporting, penalises delay in addressing complaints and taking down deepfakes, and restricts avenues for creation of deepfakes, among others,” Ranjana Adhikari, Partner, INDUSLAW, highlighted.
Deepfakes, arguably the most dangerous form of misinformation, pose unprecedented threats not just to democracy and its processes but also to the rights of digital users in online spaces. “However, the inherent risk lies in the potential exploitation of powerful technology for malicious purposes. The beauty of technology lies in its adaptability, and it can serve as a countermeasure against such crimes. AI itself offers the solution to this very problem. Collaborating with various organisations, law enforcement, and government agencies, might help in finding a resolution,” Atul Rai, CEO and co-founder, Staqu Technologies, concluded.