By Soumya Kanti Ghosh & Bikramjit Chaudhuri
ChatGPT (built on the Generative Pre-trained Transformer, or GPT) and the prospect of Artificial General Intelligence (AGI) have burst upon an astonished world. The delighted are perhaps those transfixed by the discovery that a machine can apparently carry out a written commission competently. The logical corollary is the fear of redundancy among people whose employment depends on the ability to write workmanlike prose. And the lamentations come from earnest folks (many of them teachers at various levels) whose day jobs involve grading essays written by students!
Before pressing the panic button, though, it is worth examining the nature of the beast. ChatGPT belongs to a class of systems known as large language models (LLMs), accessible to anyone online and trained on a vast corpus of human-generated text, mostly scraped from the web. You could say it has read and ingested almost everything published online. As a result, ChatGPT is remarkably adept at mimicking human language, a facility that has encouraged many of its users to view the system as more human-like than machine-like.
So far, so predictable. If history teaches us anything, it is that we generally overestimate the short-term impact of new communication technologies while grossly underestimating their long-term implications. Only two years ago, the next big things were crypto/Web 3.0 and quantum computing. The former has collapsed under the weight of its own absurdity, while the latter is, like nuclear fusion, still just over the horizon! We suspect we may have jumped the gun on GPT as well.
However, the pandemic may have hastened the move towards GPT, as labour markets across economies witnessed severe disruptions and a prolonged absence of workers. GPT has since proved useful in a wide range of applications.
Even as GPT rewrites the rules of the game, we must appreciate that such models are ultimately assistive tools that merely augment human capabilities. Treating them as anything more breeds a fatalistic compliance among those who suffer technology's damaging effects; after all, "AI" cannot plagiarise human artistic creations, only other humans can. Either way, GPT is here to stay.
If this is so, who is going to benefit? And who will be left behind? There are two contrasting ways it can evolve.
First, the good part. As businesses scramble for ways to use the technology, we must put in place effective regulations so that AI becomes a tool for the masses and is not concentrated among a few large tech companies. Training LLMs from scratch is prohibitively expensive for smaller corporations, which limits the proliferation of AI across the technology domain. For example, training a state-of-the-art model like GPT-4 reportedly costs more than $100 million.
The first regulation could be a system that widens access to the semiconductor chips, and hence the computing hardware, needed to train powerful AI models. The production of such chips is concentrated in a few key countries, such as the Netherlands, Japan, and the United States, which together account for about 90% of the global semiconductor market. Wider access would also reduce costs significantly.
Semiconductors, used in most digital devices such as smartphones and cars, have revolutionised the way mankind lives. In the same vein, AI models should serve as a greater public good, like smartphones, improving the capabilities and expertise of humans and providing a boost to the overall economy. Against this background, with multiple disruptive developments taking place in chip manufacturing and supply chains globally, and with China's dominance being curtailed through collaborative arrangements, the strategic alliance between India and the US stitched during the prime minister's recent visit is of paramount importance for India to gain a strong foothold.
The second regulation could follow the EU's Artificial Intelligence Act, which aims to strike a careful balance between promoting innovation and protecting users' rights and has already been approved by the European Parliament. Other countries are expected to take a cue from this regulation and frame their own domestic laws governing AI.
The third regulation, though not limited to AI, concerns entities outside the banking system that compete with banks by offering "banking-type services" that extensively use AI. Such non-banks, active in payment processing, credit risk assessment, person-to-person payments, merchant acquiring and buy-now-pay-later (BNPL) models, and already fully digital with millions of customers on their rolls, have a clear head start.
Now, the not-so-good part. The other path is to allow AI to evolve towards AGI unchecked and transition to an era in which only a few large tech companies control it. A handful of high-tech firms and tech elites would get even richer while doing precious little for overall economic growth. AI could then make the already troubling income and wealth inequality in many countries even worse. Interestingly, independent reports suggest that such a transition is reminiscent of the exclusive control over uranium that a few select countries held from 1938, which meant that only those countries developed nuclear capabilities! Let global AI governance evolve over time, with regulations that adapt alongside the technology. Only this can ensure that AI becomes a truly public good and a global strategic asset for the betterment of mankind.
The authors are, respectively, group chief economic advisor, State Bank of India, and senior vice president, Datamatics Global Services. Views are personal.