With advancements in technology come drawbacks such as identity theft, malware attacks and much more. The cybercrime world continuously feeds on these advancements, often without our knowledge. As tech companies continue to improve their large language models (LLMs), reports suggest there is a dark side to the technology. As reported by PTI, there has been a rise of ‘dark LLMs’ such as FraudGPT and WormGPT.
Artificial Intelligence (AI)-generated phishing scams can look far more convincing. With advances in voice and image cloning, AI can now mimic a human voice. But how do these fraudulent activities work? The objective of the threat remains the same: to harvest credentials through phishing campaigns, to cause disruption or exfiltrate data through malware, or to extort money through ransomware. The difference lies in the increased potency of these attacks.
Understanding FraudGPT and WormGPT
According to sources, cybercriminals use dark LLMs to automate and enhance phishing campaigns, create sophisticated malware and generate scam content. To achieve this, they typically rely on LLM “jailbreaking”: crafting prompts that trick the model into bypassing its built-in safeguards and content filters.
But how do FraudGPT and WormGPT work? FraudGPT can write malicious code for creating phishing pages and generate hard-to-detect malware. It also offers tools for orchestrating diverse cybercrimes, from credit card fraud to digital impersonation. Reportedly, FraudGPT is advertised on the dark web and the encrypted messaging app Telegram, where its creator openly markets its capabilities, emphasising the model’s criminal focus.
Another such tool, WormGPT, can produce persuasive phishing emails that trick even vigilant users. Based on the GPT-J model, WormGPT is also used for creating malware and launching “business email compromise” attacks. Such scams typically target specific organisations with tailored phishing emails.
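For all their AI polish, business email compromise messages still lean on the same tells: manufactured urgency, a request for money or credentials, and links whose visible text does not match where they actually point. As an illustrative sketch only (the keyword lists and scoring threshold below are arbitrary assumptions, not a production filter), a crude scanner for those tells might look like this:

```python
import re

# Illustrative indicator lists; real filters use far richer signals.
URGENCY = ["urgent", "immediately", "within 24 hours", "act now"]
REQUESTS = ["wire transfer", "gift card", "password", "invoice", "bank details"]

def phishing_score(email_text: str) -> int:
    """Count crude phishing indicators in an email body."""
    text = email_text.lower()
    score = sum(1 for kw in URGENCY if kw in text)
    score += sum(1 for kw in REQUESTS if kw in text)
    # A link whose visible text is itself a URL that differs from the real
    # target (markdown-style [text](url) here) is a classic phishing tell.
    for visible, target in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", email_text):
        if visible.startswith("http") and visible not in target:
            score += 2
    return score

def looks_suspicious(email_text: str, threshold: int = 2) -> bool:
    return phishing_score(email_text) >= threshold
```

Such keyword rules are exactly what dark LLMs are good at evading by rephrasing, which is why the experts quoted below argue for AI-based detection rather than static rules alone.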
Furthermore, “The emergence of FraudGPT and WormGPT represents a new frontier in online threats. These dark AI models are designed to help with phishing attacks, fraud, and malware distribution. To protect themselves from these sophisticated threats, businesses must adopt a multi-layered security strategy,” Naveen Garg, Cybersecurity Reliability Engineer, Akamai Technologies, explained.
The safety road
In India, it is estimated that more than 30% of businesses have already faced threats from such malicious AI tools. This underscores the critical need for strong cybersecurity measures. So, how can you protect yourself from such scams? Given below are some guidelines:
- Opt for two-factor authentication, and pause to verify links before clicking on them.
- Stricter government regulations on AI are one way to counter these advanced threats.
- Use AI-based threat detection tools, which can monitor for malware and respond to cyber attacks more effectively.
- Keep your software up to date; timely patches are crucial for security.
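The two-factor authentication advice above usually means time-based one-time passwords (TOTP, RFC 6238), the six-digit codes an authenticator app derives from a secret shared with the service. A minimal sketch of that derivation using only the Python standard library (the base32 secret below is the RFC test value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret ("12345678901234567890" in base32) at Unix time 59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because the code changes every 30 seconds, a phished password alone is not enough to log in, which is why 2FA blunts even the most convincingly worded phishing email.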
Industry reacts
In spite of developments around LLMs and the addition of safety features, where does the loophole lie? In response to this, “With most enterprises adopting LLMs, it becomes critical to combat malicious LLMs, and a good strategy would be to use AI. There is an increased sophistication and speed at which malicious LLMs generate phishing text or produce malicious code or malware. This makes a compelling case to counter the frauds with AI-based threat detection and mitigation systems,” Sujatha S Iyer, manager – AI in Security, ManageEngine, Zoho Corp, explained.
“The IT Act, 2000 provides for legal recognition for transactions through electronic communication, also known as e-commerce. The Act was amended in 2009 to insert a new section, Section 66A, which was said to address cases of cybercrime with the advent of technology and the internet. I believe implementation of these acts can help to control the damages made through these LLMs,” Siddharth Chandrasekhar, Advocate and Counsel, Bombay High Court, highlighted.
“For those seeking comprehensive protection, leveraging the security services of industry giants like AWS and Azure is a wise move. These platforms have the financial resources and capabilities to defend against such sophisticated attacks, ensuring a higher level of security for your data,” Pawan Prabhat, co-founder, Shorthills AI, said.
Moreover, “Collaboration between technology providers, cybersecurity experts, and regulatory bodies is crucial to develop robust defenses against these advanced threats. The challenge is significant, but with proactive measures and continuous adaptation, we can mitigate the risks posed by dark AI,” Jaspreet Bindra, founder, Tech Whisperer, concluded.