Electronics and information technology (IT) minister Ashwini Vaishnaw has struck the right balance on the need for regulating artificial intelligence (AI). The government, he said, is mulling a regulatory framework for the technology, specifically with regard to the risks of algorithmic bias and the questions over copyright that will emerge. But the process will be unhurried. This is the pragmatic approach, as appropriate AI regulation isn’t just one country’s issue; India alone cannot enact legislation and stay protected against the pitfalls of a free run for AI. The digital world erases the limits of political geography, and controlling AI will therefore need global coordination. To that end, the minister’s assertion that India’s regulatory framework will flow from international deliberations is well thought out.

So, even as the European Union debates draft legislation to regulate AI (the EU AI Act) and the US opts for more disaggregated oversight distributed among different federal agencies, governments must work on targeting common threats through comparable regulatory measures. Brookings fellow Alex Engler writes that while “the EU and US strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards”, the specifics of these AI risk-management regimes have more differences than similarities. On many AI applications, especially those related to socio-economic processes and online platforms, the EU and US are on a path to significant misalignment. Perhaps India could use its G20 presidency to herd members towards greater cohesion in their AI strategies.

There are potentially many areas where runaway AI could lead to unintended, highly disruptive consequences. Jobs, for one, have come under its shadow. The World Economic Forum’s latest report on the future of jobs lists AI and other such technologies among the most significant disruptors of global labour markets, even as they drive the creation of new kinds of jobs. Indeed, even Sam Altman, the CEO of OpenAI, the company that developed the generative AI chatbot ChatGPT, believes it to be a threat to jobs. Appearing before a US Senate panel, Altman told lawmakers that one of his greatest fears about ChatGPT is the disruption it could bring to the labour market, and he sought their help in managing the impact. And earlier this month, Geoffrey Hinton, widely seen as the godfather of AI, resigned from Google, whose AI offering, Bard, can carry out programming and software development functions, including code generation, debugging and code explanation, in more than 20 programming languages. The British-Canadian cognitive psychologist and computer scientist said he regretted his work and that some of the dangers of AI were “quite scary”.

This means AI regulatory frameworks are an imperative. More so since countries like China, which recently announced draft regulations requiring generative AI products to be registered with its cyberspace agency for, among other things, pre-release security assessments, are widely perceived to be unhesitating about using AI for strategic purposes as well. But any regulation of AI obviously needs private-sector guardrails too. As companies increasingly embed AI in their products, services, processes and decision-making, attention is shifting to how data is used by the software, particularly by complex, evolving algorithms. The private sector needs to come up with strong protective measures of its own.