‘Defensive AI will keep firms a step ahead of threat actors’

In this interview, Vaibhav Tare, chief information security officer, Fulcrum Digital, tells Alokananda Chakraborty how organisations can deploy artificial intelligence to enhance their threat detection capabilities. Edited excerpts.

Vaibhav Tare, chief information security officer, Fulcrum Digital. (Photo source: Financial Express)

A study by Darktrace reveals that 88% of security leaders in organisations anticipate the inevitability of offensive AI and over 80% of cybersecurity decision-makers agree that organisations need advanced cybersecurity defenses to tackle offensive AI threats.

Businesses have relied on AI to fight fraud and financial crime for decades. What’s this recent hullabaloo around defensive AI?

The advent of defensive AI represents a paradigm shift in this domain. Unlike traditional approaches that primarily focus on identifying patterns and anomalies, defensive AI proactively defends against adversarial attacks and evolving fraud tactics by incorporating techniques from game theory, adversarial machine learning, and cybersecurity.

Defensive AI models are trained not only on historical data but also on simulated adversarial attacks, enabling them to anticipate and counter evolving fraud patterns. Moreover, techniques like adversarial training and defensive distillation enhance the resilience and robustness of these models against adversarial attacks. The increasing sophistication of fraud tactics, the rise of adversarial attacks on AI systems, and regulatory pressure have fueled the interest in defensive AI. The high-stakes financial implications of fraud and the advancements in AI and machine learning have further accelerated its adoption. Defensive AI’s interdisciplinary nature, drawing upon expertise from various fields, has sparked collaboration and innovation, driving progress in this crucial area.
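To make the adversarial-training idea concrete, here is a minimal, purely illustrative sketch: a toy fraud classifier whose training inputs are nudged in the direction that most increases the loss (an FGSM-style step), so the learned decision boundary tolerates small adversarial shifts. The data, model, and epsilon are all invented for illustration; production systems use far richer features and frameworks.

```python
import math
import random

# Toy 1-D fraud classifier: genuine transactions cluster near 0,
# fraudulent ones near 2. All values are hypothetical.
random.seed(0)
data = [(random.gauss(0.0, 0.3), 0) for _ in range(200)] + \
       [(random.gauss(2.0, 0.3), 1) for _ in range(200)]

def predict(w, b, x):
    return 1 / (1 + math.exp(-(w * x + b)))

def train(data, adversarial=False, eps=0.5, lr=0.1, epochs=30):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            if adversarial:
                # FGSM-style step: perturb the input by eps in the
                # direction that increases the loss (sign of d(loss)/dx),
                # so the model also learns from "attacked" samples.
                grad_x = (predict(w, b, x) - y) * w
                x = x + eps * (1 if grad_x > 0 else -1)
            p = predict(w, b, x)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

w, b = train(data, adversarial=True)
# A fraudulent sample nudged toward "normal" should still be flagged.
print(predict(w, b, 2.0 - 0.4) > 0.5)
```

The design point is that the model never sees only clean history: every epoch it also sees worst-case perturbed inputs, which is the essence of the adversarial training the interviewee describes.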

Which are the areas where AI can improve security?

A key area is threat detection – AI can analyse massive datasets to identify potential attacks, even new and evolving threats that may evade human analysts. Machine learning models can be trained on historical data to recognise normal patterns and flag deviations from expected behaviour. AI also enhances data protection by safeguarding sensitive information through advanced encryption, monitoring data access, and identifying unauthorised users.
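The baseline-and-deviation idea can be sketched with a simple statistical stand-in for the behavioural models a real system would learn. The traffic figures and the three-sigma threshold below are illustrative assumptions, not a production detector.

```python
import statistics

# Hypothetical baseline: hourly outbound-traffic volumes (MB)
# observed during normal operation.
baseline = [102, 98, 110, 95, 105, 99, 101, 97, 108, 103]

mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag observations more than `threshold` standard
    deviations away from the learned baseline."""
    return abs(value - mean) / std > threshold

print(is_anomalous(104))   # a normal hour -> False
print(is_anomalous(480))   # possible exfiltration spike -> True
```

Real deployments replace the single mean/deviation pair with multivariate models over many behavioural features, but the detection logic is the same: learn "normal" from history, then alert on deviations.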

Network monitoring is another crucial application. AI systems can vigilantly track all network traffic, scrutinising data flows for malicious payloads or suspicious activity that may indicate a breach. This 24/7 monitoring augments human teams. AI is also invaluable for threat mitigation – rapidly detecting compromised devices or data exfiltration attempts and initiating automated responses to contain the threat.
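The automated-response step can be illustrated with a minimal triage loop: when the monitoring layer flags a host, a playbook records a containment decision. The event fields and the quarantine action are hypothetical; a real system would call a firewall or EDR API at the marked line.

```python
# Event types that trigger automated containment (illustrative).
SUSPICIOUS = {"data_exfiltration", "malware_beacon"}

def triage(events):
    """Return the set of hosts to isolate based on flagged events."""
    quarantined = set()
    for event in events:
        if event["type"] in SUSPICIOUS:
            # In production this would call a firewall/EDR API;
            # here we only record the containment decision.
            quarantined.add(event["host"])
    return quarantined

events = [
    {"host": "10.0.0.5", "type": "dns_lookup"},
    {"host": "10.0.0.9", "type": "data_exfiltration"},
    {"host": "10.0.0.5", "type": "malware_beacon"},
]
print(sorted(triage(events)))  # ['10.0.0.5', '10.0.0.9']
```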


Looking ahead, AI is set to play a vital role in predictive cybersecurity. By processing vast amounts of security data, AI can assess an organisation’s attack surface and vulnerabilities to forecast potential threats. This allows proactive defense measures before incidents occur. Additionally, AI enables robust user authentication through behavioral biometrics and continuous identity verification. As cyber threats grow more sophisticated, the scale and automation that AI provides will be essential for comprehensive digital defense.

What are the key steps in integrating defensive AI into an organisation’s cybersecurity framework?

Integrating defensive AI into an organisation’s cybersecurity posture is a multi-faceted process. It begins with a comprehensive threat assessment to identify and prioritise potential cyber risks. Relevant data must then be collected from various sources like network traffic, logs, and threat intelligence feeds. This data needs to be preprocessed and organised into secure pipelines for training AI models.

Model development is a critical step, leveraging techniques like adversarial training to build robust detection capabilities aligned to the prioritised threats. Rigorous testing and validation are also essential to ensure the AI models perform accurately and resist evasion attempts. Once validated, the models need to be integrated with existing security tools and platforms, enabling coordinated detection and automated response workflows.


Continuous monitoring of the deployed AI models is crucial. Feedback loops help identify novel attacks or blind spots that the models missed. This intelligence informs the retraining and updating of models to keep pace with evolving threats. Strong governance frameworks, focusing on transparency, privacy and ethical AI practices, must be instituted in parallel.
Finally, upskilling cybersecurity teams, fostering cross-functional collaboration, and industry knowledge sharing are vital for effectively operationalising defensive AI capabilities within organisations. A cohesive strategy spanning people, processes and technology is key to realising AI’s full potential in cybersecurity defense.

How does defensive AI ensure that the enterprise is one step ahead of offensive AI? In other words, how can AI models avoid being learned and exploited by attackers?

With Check Point Research’s 2023 Mid-Year Cyber Security Report revealing an 8% spike in weekly global cyberattacks in the second quarter of the year alone, it is clear that cyber threats continue to evolve at an unprecedented pace. Integrating defensive AI has become crucial for robust cybersecurity.

The first step is conducting a comprehensive risk assessment to identify potential threat vectors and prioritise areas requiring AI-driven defenses. Organisations must then collect and preprocess relevant cybersecurity data from sources like network traffic, logs, and threat intelligence feeds. This data fuels the development and training of AI models tailored to detect and respond to prioritised threats, using techniques like adversarial training for enhanced robustness. Regular testing, validation, and patching of the AI models are vital for assessing performance against known attacks. Once validated, these models can integrate with existing security infrastructure, enabling automated detection, analysis, and response capabilities, cutting down on response time and potential losses.

Consistent monitoring tracks the real-world efficacy of deployed models, with feedback loops identifying novel threats or evasion tactics. This intelligence facilitates periodic retraining to keep models updated against the evolving landscape. Governance ensuring ethical, transparent and accountable AI use is paramount.
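The feedback loop described above can be sketched as a drift check: analyst-confirmed outcomes are compared with the model's verdicts, and retraining is triggered when the observed miss rate crosses a threshold. The data structure and the 5% threshold are illustrative assumptions.

```python
def needs_retraining(outcomes, max_miss_rate=0.05):
    """outcomes: list of (model_flagged, actually_malicious) pairs
    gathered from analyst feedback. Trigger retraining when the
    share of missed threats exceeds max_miss_rate."""
    misses = sum(1 for flagged, truth in outcomes if truth and not flagged)
    total_malicious = sum(1 for _, truth in outcomes if truth)
    if total_malicious == 0:
        return False
    return misses / total_malicious > max_miss_rate

# 18 threats caught, 2 missed, 80 benign events correctly ignored.
feedback = [(True, True)] * 18 + [(False, True)] * 2 + [(False, False)] * 80
print(needs_retraining(feedback))  # True: 2/20 = 10% missed threats
```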

Will it strengthen the AI defenders’ hands if AI vendors worked directly with the cybersecurity industry? Which are the areas where we need to collaborate closely?

AI vendors can strengthen cybersecurity by applying capabilities such as machine learning, pattern detection, and behavioural analysis to build highly sensitive, adaptive and proactive security tools. Combined with the cybersecurity industry’s domain expertise, these capabilities can yield elevated solutions for threat intelligence sharing, vulnerability assessment, incident response, and regulatory compliance, which can adapt to emerging threats, identify system weaknesses, and streamline incident detection and response. This helps enterprises achieve and maintain regulatory compliance, ensuring adherence to data protection and privacy standards. Real-time threat detection further empowers organisations to identify and mitigate security threats effectively.

A secure cyber future really depends on collaborations between AI vendors and cybersecurity firms that foster innovation. These collaborations lead to the development of tailored solutions that meet the evolving security needs of organisations, as each party brings its core expertise to bear.

This article was first uploaded on May 2, 2024, at 9:30 am.