Cyberattacks can be devastating; here’s how AI could make them worse

Phishing has long been a potent weapon in the cybercriminal arsenal

AI tools will enable phishing emails to be highly localized

By Rohit Aradhya

Cyberattacks are expensive. For many victims the cost of containing, neutralizing, and recovering from an attack can run into millions of dollars – and that’s before you consider the impact on brand reputation, employee morale, and more. Our latest international research among businesses with 100 to 5,000 employees found that the average annual cost of responding to security incidents in 2023 was $5.34 million. Most of the companies surveyed reported that attacks had become more sophisticated (62%) and more severe (55%), with incidents taking longer to investigate and fix.

As attackers start to leverage AI to scale the volume, speed, and sophistication of their attacks, these trends will continue and accelerate. Another key risk is that AI-enabled cyberattacks can adapt to and learn from the defences they encounter. For example, traditional defences rely on pattern- or signature-based blocking, which AI-enabled attacks can learn and find ways to circumvent.
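
To see why fixed patterns are so easy to sidestep, consider a minimal sketch (the signatures and payload below are hypothetical placeholders, not real indicators): an exact-match scanner catches a known byte string but misses a trivially re-encoded copy of the same content – the kind of mutation an automated tool can apply endlessly.

```python
# Minimal sketch: why exact signature matching is brittle.
# The signatures and payload are hypothetical placeholders.
import base64

SIGNATURES = [b"cmd.exe /c powershell", b"malicious-macro-v1"]

def signature_scan(payload: bytes) -> bool:
    """Flag the payload if it contains any known byte signature."""
    return any(sig in payload for sig in SIGNATURES)

original = b"... cmd.exe /c powershell -enc SQBFAFgA ..."
mutated = base64.b64encode(original)  # one trivial, automatable mutation

print(signature_scan(original))  # True  -> blocked by the signature
print(signature_scan(mutated))   # False -> same content slips through
```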

Half of the companies surveyed anticipate that AI will empower hackers, yet only 39% feel that their security infrastructure is adequately prepared to respond to automated, AI-driven threats. This is a bleak statistic, but it is not as worrying as it might seem: if you know you have security gaps, you can address them.

And the time to address them is now. Our new CISO guide to AI in cybersecurity highlights seven ways in which attackers can harness AI technologies:

  1. To develop and launch more convincing email attacks. Phishing has long been a potent weapon in the cybercriminal arsenal, and part of its success is down to its ability to evolve over time. The main application of AI in phishing, spear phishing, and business email compromise is to automate content generation. Generative AI can be used to create personalized and contextually relevant messages, which increases the likelihood of success. AI tools can also help attackers spoof legitimate email addresses, trawl through public information to identify targets and tailor attacks, and mimic communication patterns to deceive recipients. The absence of grammatical errors in AI-generated text adds a layer of sophistication and makes it even harder for traditional security measures that depend on spotting human-induced anomalies, such as spelling and grammar mistakes, to identify malicious messages.
  2. To create or adapt malicious code. The advent of malicious AI-driven tools such as WormGPT and EvilGPT signals a shift in cyber threats. These tools will empower adversaries to automate vulnerability discovery and exploit weaknesses, potentially leading to a surge in zero-day attacks. AI could further enable the creation of adaptive malware: malicious code that changes its behaviour to evade detection. Other examples of AI-driven malware attacks include the generation of unique, polymorphic malicious attachments, dynamic malware payloads that adapt to the target environment, and content obfuscation to bypass static analysis tools.
  3. To build bigger botnets for DDoS attacks. The increased coordination and automation capabilities of AI-powered botnets could amplify the potential for massive distributed denial-of-service (DDoS) attacks. AI-powered botnets can systematically bypass CAPTCHA tools and proof-of-work mechanisms. They can also evolve to evade traditional bot-detection algorithms that rely on historical datasets.
  4. To impersonate people with deepfakes. AI-generated deepfake videos and audio have emerged as powerful tools for impersonation. Anyone with access to relevant video footage and audio recordings can use AI-powered tools to create realistic fake images and voice simulations. By embedding such deepfakes into phishing messages, attackers can create highly convincing content to deceive recipients. For example, AI-enabled voice fraud could simulate thought leaders and influencers to spread scams or disinformation across social media platforms. Other deepfake scams could lead to direct financial losses for companies.
  5. To localize content. AI tools will enable phishing emails to be highly localized, tailored to linguistic, cultural, and industry-specific contexts. This could include multilingual phishing emails, regionalized content with cultural references, industry-specific jargon, and references to local brands and institutions – all designed to enhance the apparent authenticity of phishing attempts and make them more likely to succeed.
  6. To steal access credentials. Many cyberattacks begin with credential theft, which gives the attackers access to an account and the network beyond. AI tools can help attackers achieve this goal in several ways. For example, attackers can leverage AI to create highly convincing fake login pages resembling legitimate websites. They can scale up credential-stuffing attacks with the high-speed testing of large volumes of username and password combinations obtained from data breaches. And AI-based password-cracking or CAPTCHA-defeating tools can attack more efficiently than traditional methods, further complicating defence mechanisms.
  7. To poison AI training models. AI-powered applications depend on large sets of training data to train and retrain their models. Good data gives good outcomes. If data security is breached and attackers maliciously poison the training data by injecting noise, distortions, or mislabelled examples, AI-enabled systems can deliver dangerous and unpredictable results – the sketch after this list shows this effect in miniature. Data security is of paramount importance in AI-enabled organisations, especially those that rely on AI for automated decision-making, for example in connected IoT systems such as traffic signals, flow control mechanisms, and more.
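
To make the poisoning risk concrete, here is a minimal, entirely synthetic sketch (toy scikit-learn dataset, arbitrary model choice, assumed flip rate) showing how silently relabelling part of the training data degrades a simple classifier – the same failure mode, in miniature, that poisoned data can induce in production models:

```python
# Toy demonstration of training-data poisoning via label flipping.
# Illustrative only: synthetic data, arbitrary model, assumed flip rate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: silently relabel 40% of one class as the other class,
# biasing the model's decision boundary.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(np.where(y_train == 1)[0],
                  size=int(0.4 * np.sum(y_train == 1)), replace=False)
poisoned[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))  # lower
```

In practice an attacker’s changes are subtler than wholesale label flips, which is exactly why poisoned training data is so hard to spot after the fact.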

Conclusion

As organisations prepare for an AI-driven world, it is important to understand how attackers might abuse AI tools and technologies. This will enable companies to harden their defences and adapt their security detection and prevention methods accordingly. AI offers many business benefits, and its value to cybersecurity is both immense and proven. We don’t need to fear attackers using AI; we just need to be ready.

The author is VP engineering and managing director, Barracuda Networks


This article was first uploaded on March 29, 2024, at 11:33 am.
