Role of generative AI in India

Developers should guard against “hallucinations” caused by AI

By Sajai Singh, Parvathy Manoj, Riya Saraf

Generative AI, spanning chatbots and content generation, holds immense potential for industry transformation. In India, the legal landscape for its use is complex and evolving. Despite the AI revolution, there is no specific law regulating the technology. The government has issued ethical guidelines, but these lack legal enforcement. Regulation under the proposed Digital India Act has been mooted, but there is no clear timeline for its enactment. This article addresses publisher liability, content restrictions, intellectual property, data privacy, and other legal aspects of this emerging field.

Publisher Liability and the Safe Harbor

One of the thorniest issues with generative AI is the potential legal liability faced by those who provide content generated by AI systems. In India, there’s a promising escape route for these providers: they can argue that they operate as intermediaries and seek refuge in safe harbor protections. To secure these protections, intermediaries must refrain from initiating content transmission, avoid selecting recipients, and steer clear of any alterations to the information being conveyed. Moreover, they must meticulously follow the provisions of the Information Technology Act (IT Act) and the Intermediary Guidelines.

Content Restrictions and the Role of AI Providers

While explicit restrictions on AI-generated content are lacking, implicit principles apply. Content involving obscenity, privacy invasion, discrimination, harassment, or the promotion of violence and hatred is likely prohibited. Developers must also anticipate liability under the IT Act for inaccurate or biased content that harms users.

Developers should guard against AI “hallucinations”: false or illogical outputs that can arise when systems draw on unreliable sources. Ensuring the accuracy and reliability of AI outputs, and thoroughly vetting generated content, is crucial. Developers should also remain cognizant of the licensing terms of any open-source software used in their AI systems, as non-compliance with those terms may lead to breach of contract and termination of the open-source license.

Intellectual Property

Generative AI brings intellectual property concerns into play, particularly regarding copyright law. Under the Copyright Act, 1957, copyright is granted to original works involving a degree of creativity, not just skill and labor. However, AI-generated output often combines existing sources, potentially lacking the requisite human creativity. While some argue that human involvement is not the sole measure of creativity, legal precedents paint a mixed picture.

The Copyright Act acknowledges the authorship of computer-generated works but remains unclear on who qualifies as the “person” causing the work’s creation: it could be either the developer of the AI tool or the user inputting the queries. Since AI lacks legal personhood in India, it may be difficult to treat the AI itself as the author.

Because AI tools often rely on copyrighted data, potential copyright infringement claims may arise. However, Indian law presents some hurdles for such claims. First, the Copyright Act requires the infringing party to be a “person,” which AI tools are not. Even if the developer is considered the infringing party, the Copyright Act provides a defense of “fair dealing” with copyrighted work. Whether a use qualifies as fair dealing depends on factors such as the nature of the work, the extent of the copying, and the purpose of the use. Developers may also raise a “transformative use” defense if the AI-generated work is substantially different from the original.

The lack of copyright protection for AI-generated works may also affect the advertising industry. Advertisers may not own the works they create using AI systems. Transferring ownership of AI-generated content to clients also poses a problem, as the advertising agencies themselves would not be deemed the rightful owners under current law.

Data Privacy and Consumer Protection

Since AI systems rely on huge amounts of data, some of which may include individuals’ personal data, organisations using and developing AI systems must ensure compliance with applicable data privacy laws. They should process personal data only on valid legal bases and put appropriate security measures and practices in place to protect user data and prevent breaches. In light of the Digital Personal Data Protection Act, 2023, organisations using AI systems should remain cognizant of its compliance requirements and align their use of AI accordingly. Caution is also advised, as consumer protection claims may arise where AI is used with customer data for business advantage, potentially violating the Consumer Protection Act, 2019.

Developers should secure adequate insurance coverage, such as cyber insurance, to protect against these legal risks, including first-party and third-party liabilities and cybersecurity expenses. As India’s regulatory regime develops, balancing innovation, freedom of expression, and responsible AI use is a delicate tightrope walk that must be managed as this field continues to advance.

The authors are, respectively, partner and associates at J. Sagar Associates.



This article was first uploaded on October 2, 2023, at 1:45 pm.