As artificial intelligence (AI) sees large-scale adoption, OpenAI reports that the technology is also being used by cybercriminals. According to a recent report published by the AI company, ChatGPT is being used to assist in fraudulent activities. Here’s a look at how ChatGPT can pose a threat to you.
ChatGPT: A threat in disguise
OpenAI released a report named “Influence and Cyber Operations: An Update,” highlighting the malicious use of ChatGPT. It states that cybercriminals are using ChatGPT to write code for fraudulent activities, including developing malware, executing social engineering attacks and carrying out post-compromise operations. ChatGPT is being exploited in such frauds to gain access to your data. The AI chatbot is also being used to create deepfakes. For example, a scammer might create a fake profile of someone you know and then ask you for money, potentially drawing you into money-laundering activities.
OpenAI says it has disrupted about 20 fraudulent cyber operations that used ChatGPT since the start of 2024. Most of these cases involved social engineering and email phishing. The first publicly reported case of an AI-assisted attack emerged in April 2024, when the cybersecurity firm Proofpoint identified TA547, a threat actor also known as “Scully Spider,” using an AI-generated PowerShell loader in its malware chain.
But how exactly is ChatGPT being misused? Criminals use ChatGPT’s code-generation abilities and natural language processing (NLP) to complete tasks that would typically require specialised technical expertise. ChatGPT is also used for assistance in developing custom Python and bash scripts, which helps the scams evade fraud-detection systems.
The safe way ahead
Over the past few years, AI has been adopted by organisations of every size, from the Big Four to small startups. Despite its positive uses, the technology is also being misused. ChatGPT can be a boon for creating content and generating ideas, but some of those same capabilities are being turned against users and can eventually be used to harm you. So, how do you keep yourself protected? Given below are the safety nets you need to weave around yourself:
- Do not reply to suspicious texts. Even if a message appears to come from a saved contact, always cross-check before responding.
- Be wary of calls from unknown numbers. There have been many cases where AI has been used to clone the voices of people you know. Avoid such calls, and consider using a caller-identification app such as Truecaller to verify them.
- AI-generated and original content differ in subtle ways. Look for these tell-tale signs, such as unnatural movement, mismatched lip-sync or odd phrasing, whenever you suspect a deepfake-related attack.
Apart from these, OpenAI has also taken steps to curb the misuse of ChatGPT. To address the rising threat, it has shut down accounts involved in these fraud operations and shared relevant Indicators of Compromise (IOCs), such as attack methods and IP addresses, with cybersecurity partners. OpenAI says it will also strengthen its monitoring systems.
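To illustrate why sharing IOCs matters, here is a minimal sketch, in Python, of how a security team might match shared indicators against its own connection logs. The IP addresses, log entries and variable names below are hypothetical placeholders for illustration only, not actual indicators from OpenAI’s report.

```python
# Minimal sketch (hypothetical data): matching shared IOC IP addresses
# against a local connection log.

# IP addresses a security team might receive from a threat-intel partner
# (these are reserved documentation-range addresses, not real indicators)
shared_ioc_ips = {"203.0.113.25", "198.51.100.7"}

# Simplified connection log: (timestamp, destination IP)
connection_log = [
    ("2024-10-01T10:02:11Z", "93.184.216.34"),
    ("2024-10-01T10:05:42Z", "203.0.113.25"),
]

# Flag any connection that touches an IOC-listed address
for timestamp, dest_ip in connection_log:
    if dest_ip in shared_ioc_ips:
        print(f"ALERT: connection to IOC-listed address {dest_ip} at {timestamp}")
```

In practice, defenders feed such shared indicators into firewalls and monitoring tools so that traffic tied to known fraud operations can be blocked or flagged automatically.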