Artificial Intelligence (AI) has become increasingly prevalent in our daily lives and business operations. However, alongside its benefits comes the risk of exploitation: the rapid adoption of AI has caught the attention of cybercriminals.
Kaspersky, a global cybersecurity and anti-virus provider, underscores in its latest research a pressing concern: the sophisticated use of AI by threat actors to carry out malicious activities. It has found that the potential for AI to be used for offensive purposes is expanding, with cybercriminals finding innovative ways to exploit these technologies.
The report notes that AI systems like ChatGPT can be used to write and deploy malicious code, automating attacks so that threat actors can target multiple users more efficiently and effectively. Advanced AI can also analyse data from smartphone accelerometers, potentially capturing sensitive information such as messages, passwords, and financial details without users’ knowledge. Further, AI-powered swarm intelligence can manage autonomous botnets: networks that self-repair and restore themselves after being disrupted, making them resilient to countermeasures.
AI’s capability to generate convincing content has also given rise to new social engineering tactics. Cybercriminals are using AI to create phishing attacks that overcome language barriers and tailor messages using personal information harvested from social media. These AI-generated scams can mimic specific individuals’ writing styles, making them harder to detect.
Deepfakes, a technology that creates highly realistic but fake audio and video, pose additional risks. From celebrity impersonation scams to the impersonation of company executives in high-stakes financial fraud, deepfakes have already led to significant financial losses and security breaches.
Not only can AI be used to launch attacks, but AI systems themselves are also vulnerable to various forms of cyberattack. Attackers can craft queries that bypass restrictions in large language models, leading to unintended or harmful responses. Hidden information embedded in images or audio can likewise mislead machine learning algorithms, causing them to make erroneous decisions.
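To see how an imperceptible perturbation can flip a model's decision, the following minimal sketch applies the well-known fast gradient sign method (FGSM) to a toy logistic-regression classifier. The weights, the input, and the perturbation budget are all synthetic illustrations chosen for this example; none of it comes from Kaspersky's report.

```python
# A minimal FGSM sketch against a toy logistic-regression "image" classifier,
# using only NumPy. All weights and inputs here are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logistic regression over a flattened 8x8 image.
w = rng.normal(size=64)  # hypothetical trained weights
b = 0.1                  # hypothetical bias

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.uniform(0.0, 1.0, size=64)  # a benign input
p_clean = predict(x)

# FGSM: nudge every pixel slightly in the direction that flips the prediction.
# For logistic regression the gradient of the class-1 logit w.r.t. x is w,
# so sign(w) pushes the score toward class 1 and -sign(w) away from it.
epsilon = 0.05  # small per-pixel perturbation budget
direction = -np.sign(w) if p_clean > 0.5 else np.sign(w)
x_adv = np.clip(x + epsilon * direction, 0.0, 1.0)

print(f"clean score:       {p_clean:.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward the other class
```

The same idea scales to deep networks, where the per-pixel change is invisible to a human but enough to shift the model's decision.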
Kaspersky’s latest research also highlights alarming developments in password security. With the publication of the largest known password leak, containing about 10 billion lines and 8.2 billion unique passwords, the risk of password-based attacks has increased. Alexey Antonov, Lead Data Scientist at Kaspersky, reveals that 32% of user passwords are vulnerable to brute-force attacks with modern GPUs, while AI-trained models can crack 78% of passwords even faster. Only a small fraction of passwords are strong enough to withstand such advanced attacks. The report underscores that with AI being integrated into everyday products like Apple Intelligence, Google Gemini, and Microsoft Copilot, addressing these vulnerabilities becomes essential.
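For intuition about why weak passwords fall so quickly, the back-of-the-envelope sketch below estimates worst-case exhaustive-search time at an assumed GPU guess rate. The rate of 10 billion guesses per second is a hypothetical round number for a fast, unsalted hash, and the calculation ignores dictionary and AI-guided attacks, which, per the report, crack human-chosen passwords far faster than raw keyspace arithmetic suggests.

```python
# A back-of-the-envelope sketch of brute-force exposure. The guess rate is a
# hypothetical round number; real rates vary widely by hashing algorithm.
import string

GUESSES_PER_SECOND = 1e10  # assumed: ~10 billion guesses/s against a fast hash

def charset_size(password: str) -> int:
    """Size of the smallest standard character pool covering the password."""
    size = 0
    if any(c in string.ascii_lowercase for c in password):
        size += 26
    if any(c in string.ascii_uppercase for c in password):
        size += 26
    if any(c in string.digits for c in password):
        size += 10
    if any(c in string.punctuation for c in password):
        size += len(string.punctuation)  # 32 printable symbols
    return size or 1

def crack_time_seconds(password: str) -> float:
    """Worst-case time to exhaust the keyspace at the assumed guess rate."""
    keyspace = charset_size(password) ** len(password)
    return keyspace / GUESSES_PER_SECOND

for pw in ["sunshine", "Sunshine42", "x9$Lq!7vRw#2pT"]:
    years = crack_time_seconds(pw) / (365 * 24 * 3600)
    print(f"{pw!r}: ~{years:.2e} years to exhaust the keyspace")
```

Running this shows an all-lowercase eight-character password exhausted in seconds, while a long mixed-character password holds out for an astronomically long time; real attacks on human-chosen passwords are faster still.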
How to stay safe
Given the rising threat from AI-enabled cybercrime, here are key measures to protect yourself and your company:
Strengthen password security: Use complex, unique passwords and enable multi-factor authentication (MFA). Regularly update passwords and monitor for potential breaches; a minimal password-generation sketch follows this list.
Be wary of phishing attempts: Scrutinise unsolicited messages, especially those requesting sensitive information or containing urgent requests. Verify the source before taking action.
Educate and train: Regularly train employees and individuals to recognise and respond to cyber threats, including deepfake content and social engineering tactics.
Update and patch systems: Ensure that all software, including AI tools, is kept up to date with the latest security patches to mitigate vulnerabilities.
Implement robust security measures: Employ advanced security solutions that can detect and counteract both known and emerging threats.
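As referenced in the first tip, here is a minimal sketch of generating a strong, unique password with Python's standard-library cryptographic random generator. The 16-character length and full printable alphabet are illustrative choices, not a mandated policy.

```python
# A minimal sketch: generate a strong random password with a CSPRNG.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

In practice, a reputable password manager does this for you and stores the result, so each account can have its own long random password without anyone having to memorise it.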