In the run-up to the upcoming Digital India Act, leading global software companies including Microsoft, IBM, Adobe, Amazon Web Services (AWS), Zoom, SAP, and others have recommended the development of a set of standard testing protocols for high-risk artificial intelligence (AI) systems.
The companies, represented by the software alliance group BSA, emphasised the necessity of a voluntary, market-driven, and consensus-based approach to developing and testing AI systems, particularly those considered high-risk.
BSA defines high-risk AI systems as those making consequential decisions affecting individuals’ eligibility and outcomes related to housing, employment, credit, education, public accommodation, healthcare, or insurance.
In addition to testing high-risk systems, BSA offered broader policy recommendations for responsible AI use.
“BSA recommends that the Government of India take a whole-of-government and risk-based approach to AI governance, which will enable responsible innovation,” said Venkatesh Krishnamoorthy, country manager, India at BSA.
“BSA’s AI Policy Solutions provide actionable guidance to achieve these goals,” Krishnamoorthy added.
The alliance of AI and technology firms urged the government to implement risk management programmes with industry support, differentiate roles within the AI ecosystem and set rules and regulations accordingly, promote transparency through watermarks or other disclosure mechanisms for AI-generated content, pursue international interoperability to develop a shared risk-based AI policy framework, and support AI training and education initiatives.
“Obligations should be placed on organisations based on their role in the AI ecosystem so that they can appropriately address the risks that fall within their responsibilities,” BSA said. It added that countries should work together to promote multi-stakeholder dialogue and develop a shared vision for a risk-based policy approach for addressing common AI challenges.
On March 1, the ministry of electronics and IT (MeitY) had issued an advisory to all intermediaries using AI models, software or algorithms, wherein it asked them to seek permission from the government and label their platforms as ‘under testing’ before making them available to the public.
Following industry criticism that screening requirements for large language models were regressive and would throttle innovation, the government removed the mandate to seek approval before launching untested or unreliable AI models in the country.
Recently, the government advised platforms, particularly generative AI platforms such as OpenAI and Google Gemini, against publicly releasing experimental variants without adequate disclaimers.
MeitY has been working on an omnibus Digital India Act to address such emerging issues, but has said that in the interim, the Information Technology Act and other similar laws will apply in all cases of user harm, including deepfakes.
The government is also expected to amend the IT Rules soon, likely incorporating measures such as watermarking and labelling to disclose details like the source and creator of content generated by generative AI platforms.
The Telecommunication Engineering Centre (TEC) has released procedures for assessing and rating AI systems for fairness. The AI fairness score will assess bias in systems favouring specific sellers or products, although these TEC standards are not yet regulatory requirements for platforms.
