By Jayant Saran
A senior finance executive at a private company received a voice message from an unknown international number. The message seemed to be from the promoter, with a familiar voice and diction, explaining that the SIM card had been obtained for data usage while overseas. The message also asked for funds to be urgently transferred to an overseas bank account so that a new deal could be closed. This was followed by a message with the bank account details.
The executive initiated the transfer process; however, something did not seem right. He decided to call the number from which the message had been received to seek confirmation, but got no response. He then tried calling the promoter on his usual number, and that call was answered immediately. He discovered that the promoter had not requested any transfer, and the transfer process was immediately aborted.
Many people have been receiving similar voice and video messages, seemingly from family, friends, and colleagues, asking for money. The modus operandi is to send messages late at night and create a sense of urgency so that victims do not have enough time to think or seek verification. The videos and voices are near-exact matches of the persons appearing to ask for funds. In many cases, victims have transferred funds only to realise later that they have been defrauded. Several tools are available to create such fakes; in fact, apps and filters available on social media can also be used to create AI-generated videos. These are aiding unscrupulous elements in perpetrating fraud.
Defrauding people is not a new phenomenon. “Confidence men”, popularly known as con men, have been preying on their victims for centuries. They have always played on specific emotions, such as greed and fear, and in some cases, the inherent need for people to do good. Cyber fraud tends to play on similar emotions. The emotional pressure created through the perception of a loved one being in trouble, or a business deal that needs to be signed immediately, is central to the con. The big difference is that the perpetrator no longer needs to meet you physically – modern communication allows fraudsters to operate without exposing themselves. Whether it is phishing through emails, vishing through mobile calls, smishing through text messages, and now deepfakes, the perpetrator can be sitting miles away, and in many cases, in another country.
Awareness and knowledge are the only ways to deal with such frauds. If something seems too good to be true, or does not seem right or normal, it is best to reconfirm. However, this is easier said than done. Banks and financial institutions, for example, continue to educate their customers, yet many customers still fall prey to phishing, vishing, and smishing attacks regularly, and the incognito nature of the perpetrators makes such cases near impossible to investigate and resolve. Everyone needs to make concerted efforts to continuously educate those in our ecosystem about such frauds, especially senior citizens, who often fall prey to these schemes. Substantial efforts are also needed to educate children and young adults on the pitfalls of oversharing on social media or engaging with unknown people online. The continuous enhancement of generative AI will bring with it further challenges. The onus is on us to ensure that this too does not become a tool for the unscrupulous.
The author is partner and head, forensic technology, Deloitte India