Self-regulation in AI models on the cards

The TEC has recommended a risk-based approach to evaluating the robustness of AI systems, similar to that under the European Union's Artificial Intelligence Act.

Firms deploying artificial intelligence (AI) technologies with a consumer interface will need to either self-certify that their models do not cause any harm to users, or get the certification done by third-party agencies. Sources said the government will not get into checking either the robustness or the safety of any use cases developed using such technologies. Instead, it will broadly lay down standards such as reliability, explainability, transparency, privacy, and security, against which firms will be required to test their AI models. Such a move, officials believe, would not deter innovation and would keep the regulatory approach light-touch.

Currently, the Telecom Engineering Centre (TEC), the ministry of electronics and IT (MeitY) and industry stakeholders are in the process of developing such standards, on the basis of which companies working on large language models (LLMs) can conduct a self-test or a third-party audit. These models could be related to connected cars, drones, the metaverse, and healthcare systems.

Officials said that for critical AI applications such as self-driving cars, medical diagnosis and autonomous aircraft, sector regulators may mandate tolerance levels as benchmarks to pave the way for the use of AI in such applications.

The TEC recently issued a draft consultation paper on robustness assessment and rating of AI systems, which lays out standards against which AI systems can be rated and checked for security and safety. The draft, which is open for comments till December 15, has been prepared in consultation with MeitY officials and technology companies, officials said. A copy of the draft was seen by FE.

As per the draft, AI robustness is defined as the degree to which an AI system maintains its functional correctness and remains insensitive to specific adversarial phenomena in the data, model, human-in-the-loop, integration or interfaces, or deployment environment. The TEC has recommended a risk-based approach to evaluating the robustness of AI systems, similar to that under the European Union's Artificial Intelligence Act.

While prescribing qualitative standards and norms that companies can follow to check their systems before production, the consultation paper recommends a three-tier ranking system (high risk, medium risk and low risk) for AI robustness, based on parameters such as the scope of AI systems, their vulnerabilities, and the purpose of deployment.

The approach the government is now following on AI regulation differs from its earlier one, wherein MeitY issued an advisory to all intermediaries using AI models, software or algorithms, asking them to seek the government's permission and label their platforms as 'under testing' before making them available to the public.

Later, MeitY removed the requirement of seeking the government's nod before launching any untested or unreliable AI models in the country. Recently, MeitY approved eight responsible AI projects for safe and trusted AI. These projects include an AI governance testing framework, algorithm auditing tools and an AI ethical certification framework, among others.

This article was first uploaded on November 23, 2024, at 5:15 am.