According to an official release, cybersecurity firm Sophos has published two reports on the use of AI in cybercrime. The first report, "The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI," demonstrates how scammers could, in the future, leverage technology like ChatGPT to conduct fraud on a massive scale with minimal technical skill. The second report, "Cybercriminals Can't Agree on GPTs," found that, despite AI's potential, some cybercriminals remain skeptical of large language models (LLMs) like ChatGPT, and are even concerned about using AI in their attacks, rather than embracing it.
"However, part of the reason behind this research was to get ahead of the criminals. By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyse and prepare for the threat before it proliferates," said Ben Gelman, senior data scientist at Sophos.
Using an e-commerce template and LLM tools like GPT-4, Sophos X-Ops was able to build a fully functioning website with AI-generated images, audio, and product descriptions, as well as a fake Facebook login page and a fake checkout page designed to steal users' login credentials and credit card details. The website required minimal technical knowledge to create and operate, and, using the same tool, Sophos X-Ops was able to create hundreds of similar websites in minutes at the push of a single button.
"We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with skepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT, inadvertently revealed significant information about his real identity," concluded Christopher Budd, director of X-Ops research at Sophos.