– By Avimukt Dar, Raghav Muthanna and Himangini Mishra 

‘I am convinced we’re on the cusp of the most important transformation of our lifetimes’ is how Mustafa Suleyman described the advancement of Artificial Intelligence technology/systems (“AI”) in his book ‘The Coming Wave’. The AI wave is here, and the only question that remains is whether we are ready to ride it. As with any major transformation, be it the boom in new-age financial service offerings that led to multiple cases of identity fraud, theft and other data privacy concerns, or the surge in Bitcoin and other cryptocurrencies that raised theft and money-laundering concerns, the negative consumer impact of new-age technology services is amplified while the massive gains are normalised. Deep fakes, misinformation, data theft and IP infringement are some of the many recent issues indicating that AI is, unfortunately, no different. 

Thus, while regulators across key jurisdictions are still deliberating on ways and means to regulate AI, the European Union’s (“EU”) decision to enforce the world’s first standalone AI regulation, the European Union Artificial Intelligence Act (“EU AI Act”), earlier this year was hailed by the market and well received globally. The EU AI Act regulates AI by classifying systems according to the degree of risk they pose: low risk, high risk and unacceptable risk. While providers of low-risk AI are only required to comply with transparency obligations, providers of high-risk AI must comply with more onerous obligations, such as implementing risk management systems and maintaining technical documentation. AI systems employing prohibited practices, including the use of manipulative techniques or the exploitation of a person’s vulnerabilities in a manner that causes significant harm, fall in the unacceptable-risk category and are barred from being developed in, imported into or exported from the EU. The EU AI Act, being a centralised act, ensures that there is a clear classification dictating the obligations imposed on each class of AI systems, and covers key aspects of their entire lifecycle. That said, a centralised act has its own perils: it may not allow enough flexibility to regulate the evolving use-cases of AI across different sectors of business and society. 

Moving further west, the United States of America (“USA”), often considered a leading jurisdiction when it comes to innovation and technology, does not have a standalone AI regulation at the federal level. The Biden administration has, however, adopted several self-regulatory measures and guiding principles for the regulation of AI. For instance, the voluntary commitments secured by the Biden administration from companies such as Google and Meta, and the issuance of the AI Bill of Rights, enumerate principles of trust, safety and security for the development and deployment of AI. The Biden administration also passed an executive order on October 30, 2023, directing federal agencies to frame regulations for the use of AI by private and government entities; however, there has so far been no development on this front. Interestingly, several states in the USA have formulated legislation to regulate AI. Colorado recently introduced legislation which requires developers of high-risk AI to protect consumers from any foreseeable algorithmic discrimination. 

In India, regulators and government bodies including the Ministry of Electronics and Information Technology (“Meity”), the National Institution for Transforming India Aayog (“NITI Aayog”) and the Telecom Regulatory Authority of India (“TRAI”) have issued recommendations for establishing an AI regulatory mechanism in India. Interestingly, all of them endorse a risk-based approach to the regulation of AI. Meity and NITI Aayog have also recommended regulating AI through sector-specific regulators and have advocated incorporating self-regulation into their respective frameworks. A decentralised approach allows greater flexibility and better supervision, since sector-specific regulators have deeper insight into the use-cases applicable to their respective fields. Further, self-regulation can prove essential given that regulators may not always be able to pre-empt the different use-cases of AI, which could otherwise leave AI functioning in a legal vacuum. Thus, adopting self-regulation along with AI-specific guardrails may ensure that private players are able to design technology freely and yet act with accountability right from the development stage. Taking a step further, Meity also released a presentation on the Digital India Act (“DIA”) in March 2023, the first major indication of the government’s serious intent to regulate high-risk AI through accountability and assessment mechanisms. However, Meity recently faced industry backlash and was forced to withdraw an advisory requiring prior government approval for the release of certain AI products under testing in India.

While it is apparent that distinct approaches to AI regulation have been adopted globally, we are yet to see which approach the Indian government will take. If recent actions such as the Meity AI advisories are any indication, the government might adopt stern AI regulations, deterring the development of AI in India, particularly since India’s technology growth is more dependent on FDI flows than that of China, the EU or the USA.

Then again, at the recent Global IndiaAI Summit 2024, the development of ethical and transparent AI guidelines was announced as one of the key pillars of the IndiaAI Mission, emphasising a pro-innovation approach. Striking a balance that keeps consumer and ethical interests in mind without impacting innovation is thus the need of the hour if India is to deliver on its mission of becoming a developed country by 2047.

(Avimukt Dar is Founding Partner; Raghav Muthanna is Partner; and Himangini Mishra is an Associate at INDUSLAW.)

(Disclaimer: Views expressed are personal and do not reflect the official position or policy of Financial Express Online. Reproducing this content without permission is prohibited.)