By Sameer Avasarala and Prashant Phillips

Artificial Intelligence (AI) has the potential to bring significant economic and societal benefits. The growth of AI can also support socially and environmentally beneficial outcomes for the community. This is especially true for sectors such as energy and environment, health, finance, marketing, technology and agriculture, apart from deployments in the context of national security and law enforcement investigations. Beyond combining large datasets to discern patterns and features, AI's ability to measure and improve its performance with every input of training and real-world data sets it apart from other technologies.

Global regulatory approaches

Law and regulation around AI emerge from balancing the risks of user harm against the opportunities that AI presents. From the European Union's (EU) Regulation laying down rules on Artificial Intelligence (the AI Act) and Singapore's Personal Data Protection Act, 2012 (the PDPA) to China's Administrative Guidelines for Generative Artificial Intelligence (AGGAI), jurisdictions around the world have adopted varying approaches to regulating AI and associated technologies.

The EU’s proposed AI Act aims to ‘present a regulatory approach limited to minimum necessary requirements addressing risks, without hindering technological development.’ This approach justifies graded compliance and accordingly identifies certain ‘high-risk’ AI systems. Apart from notified bodies, the AI Act also proposes that national authorities coordinate AI regulation with a European Artificial Intelligence Board. For small-scale providers and startups that do not use or deploy high-risk AI systems, it proposes voluntary codes of conduct, while high-risk AI systems are subject to stricter requirements such as:

  • Establishment of risk management systems
  • Regulation of training and testing data
  • Record-keeping and technical documentation
  • Transparency and human oversight
  • Registration of such systems with the EU database

In contrast to this tiered, voluntary-adoption model, the AGGAI proposed by the Cyberspace Administration of China requires generative AI models (such as GPT) to undergo security assessments before public release. A unifying factor amongst these approaches appears to be a robust and structured regulatory framework for cross-sectoral technologies such as AI.

Indian outlook on AI and regulation

India is one of the fastest-growing markets for artificial intelligence and is expected to grow at over 20% over the next five years, as estimated in a joint study by Bain, Microsoft and the Internet and Mobile Association of India. This is particularly true for sectors such as communication, over-the-top platforms, gaming, technology and financial services, where AI adoption rates are higher.

The Ministry of Electronics and Information Technology (MeitY) took up the subject of leveraging AI in 2018 by constituting four committees: on platforms and data for AI; on leveraging AI in national missions; on mapping technological capabilities; and on cybersecurity, safety, legal and ethical issues. The committees submitted their reports in July 2019.

In response to a question in Parliament, the government indicated that a comprehensive framework for AI is not proposed. Instead, it intends to harness AI’s potential for a kinetic effect on the growth of innovation and the development of business and entrepreneurship, while standardizing responsible AI, deploying it in personalized and interactive citizen-centric services and promoting best practices.

The NITI Aayog released the National Strategy for Artificial Intelligence, which identified focal areas for AI intervention, such as healthcare, education, agriculture, smart cities, and mobility and transport, along with key challenges to its adoption. It also discussed the ethical, privacy and security challenges associated with AI. In this regard, it postulated certain principles for the responsible management of AI systems:

  • Principle of Safety and Reliability, which proposes adequate safeguards, risk minimization, and grievance redressal and compensation structures for unintended consequences, along with periodic monitoring of AI systems.
  • Principle of Equality, to ensure that AI systems treat all individuals equally under circumstances relevant to the decision.
  • Principle of Inclusivity and Non-discrimination, prohibiting discrimination in any form or denial of opportunity based on identity. AI systems must not deepen harmful historical and social divisions based on race, caste or identity, or unfairly exclude individuals from services or benefits. Any unintended adverse events must be countered by human oversight, accountability and affordable, accessible grievance redressal for users.
  • Principle of Privacy and Security of the data of individuals used for training and/or in real-world deployments, with access controls and other safeguards applied to such data.
  • Principle of Transparency, with the design and functioning of AI systems made available for external scrutiny and fair, impartial and periodic audits of such systems.
  • Principle of Accountability for actions of stakeholders involved in the design, development and deployment of AI systems, implementing risk and impact assessments and periodic audit processes for adherence to key principles.
  • Principle of Protection and Reinforcement of Positive Human Values, which requires AI systems to promote positive human values and refrain from disturbing social harmony and community relationships.

The lack of a comprehensive framework for artificial intelligence or for data protection is concerning. While the Information Technology Act, 2000 (IT Act) provides rules governing the processing of sensitive personal data or information, it is limited in application and remains technology-agnostic.

The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 do not deal with automated processing or decisions based on such processing. The same is true of the Digital Personal Data Protection Bill, 2022, which does not extensively deal with the processing of personal data using AI-based tools or provide rights associated with automated processing, such as the right not to be subject to a decision based solely on automated processing or the right to data portability.

While approaches to regulating AI vary, from ‘control’ to ‘effects’, primacy must be accorded to protecting users and stakeholders from potential harms arising from AI systems. A balanced regulatory approach must be evolved to ensure that the prerogative to protect users and stakeholders does not stifle innovation, hamper the growth of AI and associated technologies or, in turn, impact the digital economy.

The authors are senior associate and partner at Lakshmikumaran & Sridharan, respectively
