AI tools like ChatGPT and Claude have democratised coding through “vibe coding” – the art of building software with simple, conversational prompts. While big tech CEOs have been going gaga over this emerging trend, which promises to let non-coders try their hand at building apps, a darker counterpart has emerged. It turns out that scammers, just like developers, are happy to exploit these AI tools to build their scams. Dubbed “vibescamming” by cybersecurity experts at Guardio Labs in early 2025, this phenomenon flips the script.
Vibescamming enables even novice cybercriminals to craft polished phishing attacks, malware, and full-blown scam campaigns with minimal effort. As AI agents become more accessible, vibescamming is lowering the barriers to entry for fraudsters, making online deception more scalable, convincing, and dangerous than ever before.
What started as a buzzword in tech circles has quickly become a real-world threat, with reports of AI-generated scams surging throughout 2025.
How vibescamming works
At its core, vibescamming mirrors vibe coding: scammers use natural language prompts to instruct AI agents – such as ChatGPT, Claude, or more permissive platforms like Lovable – to produce malicious content. Guardio Labs’ VibeScamming Benchmark, released in mid-2025, tested popular AI models’ resistance to such abuse.
ChatGPT scored high (around 8/10) thanks to strong guardrails that block harmful requests, Claude landed in the middle (4/10), and Lovable came in alarmingly low (1.8/10), allowing users to generate live scam pages with ease.
Here’s an example: Scammers can prompt AI to create exact replicas of login pages for services like Microsoft or major banks, complete with HTML, CSS, and JavaScript for data theft. In one documented case from April 2025, Lovable was used to build full-stack phishing sites that went undetected longer than traditional scams, thanks to AI’s ability to eliminate telltale signs like poor grammar or amateur design.
Other tactics include generating personalised scam emails, SMS phishing (smishing), or even basic malware scripts. The process is iterative – if a site gets flagged, a quick re-prompt yields a variant, making it highly scalable.
A growing threat as a new cybercrime superpower
By 2025, reports from sources like The Hacker News and LMG Security highlighted a wave of AI-fueled “vibe hacking” – a related term encompassing broader cybercrimes like automated extortion campaigns using models like Claude. In one example, attackers used AI to orchestrate a “Claude extortion campaign,” generating tailored threats at scale. Another case involved “LameHug,” where AI bypassed traditional defenses like endpoint detection and response (EDR) by creating novel, undetectable malware variants.
The democratisation of crime is perhaps the most alarming impact. Traditional phishing required technical expertise. Now, anyone with AI access can launch sophisticated attacks. This has led to a spike in polished scams, with victims falling for fraud that feels eerily legitimate. Economically, global cybercrime losses exceeded $8 trillion in 2025, per estimates, with AI-amplified scams contributing significantly.
Organisations face outpaced defenses. Signature-based antivirus struggles against AI’s rapid iterations, as noted in ThreatLocker’s September 2025 blog. For individuals, the risk is personal – from stolen credentials to financial ruin. As AI models evolve without foolproof safeguards, experts predict vibescamming will continue to rise in 2026, potentially integrating with deepfakes or voice cloning for even more convincing deceptions.
How to stay safe from vibescamming
In an era where AI makes scams harder to spot, vigilance and proactive measures are key. Here are essential tips to protect yourself:
– Verify before you click. Always check URLs directly – hover over links in emails or messages to ensure they match the legitimate site (e.g., “bankofamerica.com” not “bankofamreica.co”). Use bookmarking for frequent sites instead of clicking emailed links.
– Enable Multi-Factor Authentication (MFA). MFA adds an extra layer of security to your accounts: even if your credentials are stolen, scammers can’t get in without your phone or authenticator app.
– Be skeptical of urgency and unsolicited requests. Vibescams often use fear or greed tactics, like “urgent account verification” or “prize winnings.” Pause and contact the company through official channels.
– Install reputable antivirus software with AI-detection features, and enable browser extensions like Guardio or uBlock Origin to block phishing sites. Keep software updated to patch vulnerabilities.
– Stay informed via resources from cybersecurity firms like Guardio or government alerts. If targeted, report to authorities and platforms like Google or Microsoft to help dismantle scams.
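The URL check in the first tip can also be automated. The sketch below is a minimal illustration – not a complete anti-phishing defence – that compares a link’s hostname against a hypothetical allowlist of domains you actually use, which catches typosquats like “bankofamreica.co” and suffix tricks like “bankofamerica.com.evil.io”:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the handful of domains you actually bank or log in with.
TRUSTED_DOMAINS = {"bankofamerica.com", "microsoft.com"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain
    or a subdomain of one (e.g. login.microsoft.com)."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A legitimate subdomain passes; a typosquat and a suffix trick do not.
print(is_trusted("https://login.microsoft.com/oauth"))    # True
print(is_trusted("https://bankofamreica.co/verify"))      # False – typosquat
print(is_trusted("https://bankofamerica.com.evil.io/x"))  # False – suffix trick
```

Note that the suffix trick fails because the check requires the trusted domain to be the *end* of the hostname, not merely appear somewhere inside it – exactly the distinction a rushed human reader tends to miss.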
