In today’s rapidly evolving digital landscape, artificial intelligence (AI) holds immense potential for the banking and financial services industry. From fraud detection and risk assessment to personalised customer experiences, AI offers a multitude of benefits.
However, to ensure that AI is developed and used responsibly, it is imperative to find a balance between regulation and innovation. There is an urgent need for India to proactively take the lead in AI regulation, as many countries look to us for leadership in technology. AI regulations in the form of light-touch guardrails can provide a framework that ensures responsible and ethical AI deployment, safeguards customer and institutional data privacy, protects consumers against algorithmic biases and incorrect information, and establishes mechanisms for accountability and transparency. Let’s understand these concerns better.
Prevention of Bias
AI systems can inadvertently perpetuate existing biases present in the data they are trained on. If the training data reflects societal biases, the AI models can learn and amplify these biases, leading to discriminatory outcomes. Biased AI systems can have significant social and economic impacts, particularly in sectors such as lending, and unchecked bias in AI can reinforce inequalities, exacerbate discrimination, and limit opportunities for marginalised communities.
India has worked hard to create anti-discrimination laws, and it is essential that AI systems do not violate these through reliance on biased datasets. There is therefore a need for baseline anti-discrimination and inclusion policies in AI development and deployment. Algorithmic accountability and the right against discrimination must be included in the framework. Data collection practices must be standardised and disclosed to minimise biases and account for underrepresented groups.
Compliance with Privacy Rules
AI systems must ensure that the collection and use of individuals’ personal data complies with extant Indian privacy laws, including sector-specific privacy norms. The existing framework covering data privacy standards, consent-based mechanisms, and data security measures (encryption, secure data storage, access controls, and incident response protocols) must be extended to AI systems as well, so that sensitive financial information is safeguarded from unauthorised access and data breaches.
Protection from Incorrect Information
Consumers have a right to be protected from harm caused by incorrect information generated or displayed by AI systems, and a right to redressal in such cases, as litigation in other countries already shows. There should also be tools for managing user-reported errors in real time. This necessitates establishing standards for testing, verification, and certification of AI systems to mitigate risks and ensure their proper functioning. The industry and regulators must work together to develop Self-Regulatory Organisations (SROs) and audit and certification processes that identify ethical and trusted AI platforms and provide them with market incentives. For high-risk AI systems, sandboxes can provide a controlled environment for testing new applications while allowing the ecosystem to understand and assess potential risks.
Credit Sharing and Monetisation
There is a clear need to share credit and revenue with local publishers for content utilised by AI systems. This starts with transparency: disclosing that content was generated by AI, publishing detailed credits, links, and summaries of the local data sources used, and sharing revenue with local Indian content creators. Class action lawsuits in other countries already allege that the development and deployment of AI by several companies violates the rights of millions of artists and content creators, whose images and content were scraped without consent, often from copyright-protected websites and platforms, and without compensation, attribution, or credit.
India needs its local content creators to profit and grow, and cannot allow their published content to become fodder for large AI systems that train on it and retain the lion’s share of the resulting monetisation and eyeballs. Globally, laws such as Australia’s News Media Bargaining Code of 2021 and the EU Copyright Directive are being put in place to reduce power imbalances between the various participants. It is essential that Indian content creators receive similar protection.
AI systems have a potentially transformational role to play in economies. At the same time, ethical choices that avoid discriminatory outcomes can and should be built into innovative technologies as a core objective from the outset, not as an incremental tweak. This requires a coherent, coordinated, and collaborative effort led by a dedicated taskforce. Regulatory guardrails can play a vital role in building trust and collaboration between AI platforms, financial institutions, and customers, and in ensuring that the development and deployment of AI at every level is fair, just, accurate, and appropriate. We hope that MeitY’s Digital India Act will lead the way on AI guardrails, just as RBI’s well-thought-out Digital Lending Guidelines lead the way for a more responsible, equitable, and thriving digital lending ecosystem.
(The author is CEO, BankBazaar.Com. Views are personal)