From smarter phishing to quicker malware development, AI is accelerating cybercrime as attackers exploit the new technology, a recent report from Google Threat Intelligence Group showed.
At the same time, Shane Huntley, chief technology officer at Google Threat Intelligence Group, told FE that in the long run, AI may prove to be the defenders’ biggest advantage.
Attackers’ Productivity Multiplier
Findings from the report show that threat actors are increasingly integrating AI into their operations, using it to speed up reconnaissance, generate phishing messages in multiple languages, and assist in malware development. The technology is helping attackers automate routine tasks and scale campaigns faster, lowering the effort required to plan and execute intrusions.
“It’s more of an evolution rather than a revolution in the threat. What we haven’t seen so far is any great jump in the overall threat or success of attackers, but we are seeing this growing threat,” Huntley said.
Instead, the technology is primarily acting as a productivity multiplier, allowing threat groups to work more efficiently rather than fundamentally changing the nature of cyber-attacks.
This gradual shift is visible across the attack lifecycle. AI tools are enabling faster vulnerability research, improved social engineering, and more polished phishing messages, particularly in cases where language barriers previously limited attackers. State-backed groups and cybercriminal networks alike are experimenting with these tools to refine targeting and accelerate operational workflows.
At the same time, cybersecurity teams are deploying AI at scale to counter these developments. Automation and machine learning are increasingly being embedded into threat detection systems, email filtering, malware analysis, and large-scale monitoring platforms. By processing vast datasets and identifying suspicious patterns more quickly than human analysts, AI is helping defenders expand coverage and respond to threats earlier in the attack chain.
However, Huntley said, AI can be a defender’s advantage rather than an attacker’s.
Scaling Resistance
“AI as a tool for defence actually has a great benefit by allowing us to scale, allowing us to look at more attacks, allowing us to really amplify what we’re doing on defence,” he explained.
This growing defensive use of AI could prove decisive. Unlike attackers, who must bypass multiple layers of protection to succeed, defenders can use AI to monitor networks continuously, flag anomalies in real time, and block threats across millions of users simultaneously. The ability to analyse attacks at scale and automate response mechanisms may help security teams offset the efficiency gains that AI is providing to adversaries.
Beyond enterprise customers, Google is also expanding its engagement with governments and public-sector organisations, which are increasingly relying on cloud-based threat intelligence and security platforms. Public-sector users form a significant part of the company’s security ecosystem, while ongoing engagement with policymakers aims to raise awareness and support the development of effective AI and cybersecurity regulation.
For enterprises, the emerging lesson is that AI adoption cannot be limited to productivity tools or customer-facing applications, Huntley added. Organisations deploying AI in their own operations must also account for new risks such as model extraction attempts, misuse of AI services, and increasingly sophisticated phishing campaigns. Awareness and defensive readiness are becoming as critical as innovation.
