As the world changes rapidly, with technology at the epicentre of our lives, whether we are eating, socialising, hiring or networking, the one question we cannot escape is: ‘When will artificial intelligence disrupt our lives and knowledge services?’ According to Deloitte’s India Corporate Fraud Perception Survey 2018, bots will revolutionise traditional forensic due diligence sooner than we expect, taking mundane checks away from humans and freeing their expertise for more complicated issues.
The main objective of forensic due diligence is to uncover misrepresentation of facts, whether inaccurate information or fraudulent business practices. AI-powered bots can automatically review large data sets, which are often riddled with inaccuracies and information asymmetry, in significantly less time than humans, making them ideal for various checks and screening. As fraud, misconduct and non-compliance continue to plague businesses and regulators increase their scrutiny, bots can help reduce the cost of compliance.
Despite the obvious benefits, the adoption of bots in the due diligence space has not been widespread. India is still far from a world where bots can provide a 360-degree, end-to-end solution without human intervention to contextualise information and correct false positives. Further, bots slow down significantly when working across disparate data sets, and in India, where digitisation of records is far from adequate, this poses a challenge to bot adoption in due diligence work.
Most organisations in India are working with first-generation bots that have limited capability to learn on their own and are relatively easy to program. This offers a window to understand the potential risks of deploying bots in due diligence work. As bots get smarter, they will become attractive targets for fraudsters, given the wealth of information they will have access to. In such cases, control mechanisms would need to be devised specifically for certain sensitive due diligence searches.
Further, future bots may be less objective, reflecting the biases of whoever programs them. From a regulatory perspective, bot access to certain databases remains a sensitive issue, and in the case of misuse, the liability could fall upon the organisation, which would have to explain its actions and face potential legal recourse. While the debate on whether bots will replace humans has been ongoing for close to a decade, the complexity of the tasks involved means the near future will demand bot-human collaboration rather than a bot takeover.
- By Nikhil Bedi, Partner, Deloitte India