By Srivatsa Krishna

The world looks very different from Silicon Valley than it does from anywhere else. There is a vibrancy and an intellectual buzz here that is the lifeblood of this part of the world. The big talk now is about how ChatGPT and its many successors are going to transform humanity, and about the massive risks that come with it.

Digital intelligence, or artificial intelligence (AI) in any of its myriad forms, is getting smarter than us. And when a more intelligent entity controls a less intelligent one, it does not always augur well for the latter. AI will learn manipulation from us, get around obstacles better than we can, and become far smarter at it. That is exactly where AI is headed and, at some point, it could overpower human intelligence; it already seems to be on the brink of doing so. Herein lies the main threat to humanity.

If you agree with the above, shouldn’t AI be regulated? When we regulate medicines, airplanes, food, money, and travel, why shouldn’t we regulate AI? 

Let us consider the example of GPT, an AI system trained on billions of pieces of unstructured data, which can predict the next word, the next sentence, and the next paragraph with scary accuracy. It has been trained on all kinds of data and information with no filters, which is why it often makes mistakes and produces fake information. However, as the technology improves, it will cure itself of this and become even more accurate than it is today. Consider the following:

Example 1: Today’s versions of GPT can easily summarise emails, presentations, or any notes you may have written. Consider giving it a command such as “Summarise all emails sent by me to the chief secretary and chief minister of Karnataka in one page”.
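
For readers curious about what this looks like in practice, here is a minimal sketch of how such a request might be scripted, assuming the OpenAI Python SDK is installed and an API key is configured; the model name and the sample emails are illustrative placeholders rather than a prescribed workflow.

```python
# Minimal sketch: summarising a batch of emails with a GPT model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical email bodies; in practice these would be pulled from a mailbox.
emails = [
    "Email 1: Status update on the e-governance rollout...",
    "Email 2: Follow-up on pending budget approvals...",
]

prompt = (
    "Summarise the following emails in one page, grouped by topic:\n\n"
    + "\n\n".join(emails)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```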

Example 2: Ask it to write Python code to scrape data from old, badly designed websites. It does so in minutes, using libraries such as BeautifulSoup, giving researchers access to millions of unstructured pieces of information lying scattered across the web.
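
As a rough illustration, the kind of scraper such a prompt produces might look like the sketch below; the URL and the table structure are hypothetical, and a real legacy site would need its own selectors.

```python
# Rough sketch of a scraper for an old, table-based website.
# The URL and CSS selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

url = "https://example.org/legacy-records.html"  # hypothetical legacy page
response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

records = []
for row in soup.select("table tr"):
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    if cells:  # skip header-only or empty rows
        records.append(cells)

print(f"Scraped {len(records)} rows from {url}")
```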

Example 3: Takeda, the pharma giant, paid $4 billion to acquire Nimbus, which used AI to cut drug discovery time from years to months in the search for a cure for an autoimmune disorder affecting millions of people globally.

Example 4: A group of mystery developers who believe that humans are causing the greatest damage to humanity created ChaosGPT, another variant of a GPT large language model, which promised to find ways to eliminate humanity. It was mysteriously shut down soon after its launch. If efforts like these can be weaponised with real-world deadly weapons, poison data, and put out fake videos and photographs that stoke discontent and violence around the world, then these are not easily solvable problems.

Example 5: Peek into the Dark Web or any of the well-known hacking forums, and there are examples of ChatGPT being tricked into writing malicious code, creating malware that takes over computers for ransomware, and even building anonymous third-party APIs to use Bitcoin as a Dark Web payment mechanism.

When Google first revealed Search, it fascinated everyone because it used machine learning and AI to predict human behaviour. In other words, it was able to take you exactly to the information, videos, or images you were looking for, thereby accurately predicting what was on your mind. GPT takes this to the next level: it is not about prediction but about generation. It can generate content, code, and whatever else you command it to, which opens up many exciting as well as dangerous new possibilities.

ChatGPT knows a thousand or more times more than any human in terms of general knowledge. What is fascinating is that GPT models have only about a trillion connection strengths in their artificial neural networks, whereas the human brain has about 100 times that. So, with roughly 1/100th of the storage capacity of the human brain, it already knows thousands of times more than us. This strongly suggests that it has a better way of gathering, classifying, and understanding information than we do. On top of that, unlike the human brain, machines can learn faster because many exact copies of the same neural network can run on hardware in different parts of the world. These copies can exchange billions and trillions of bits of data with one another, something a human brain can never do, and thus learn exponentially faster. This is why GPT is far superior in terms of its learning ability.

What would be the broad outline of regulating a super-smart machine? The first, and easiest, step would be to pass legislation in every country making it mandatory to specify which images, videos, and content have been generated using AI or GPT. This would make such content, at the very least, identifiable to a layperson. The next part of the regulation would be tougher: creating a supranational body where governments come together and agree on common norms and laws to regulate AI. Apart from being hampered by the usual collective-action and free-rider problems, this would also not be in the interest of countries such as China, which might want to retain aggressive leads in AI made possible by the use, and indeed the misuse, of citizens’ data with impunity.

If the world does not agree, or at the minimum, if individual countries do not set up a Digital Regulation Commission that agrees on the broad outlines of what should be done to regulate this super intelligence, then the evil side of AI will become a race to the bottom. This will enable bad actors to take advantage of it faster than good actors can prevent it. A Digital Regulation Commission should also mandate and enforce that private-sector investment in safeguards against evil uses of AI is at least half of what is invested in improving AI manifold.

(Srivatsa Krishna is an IAS officer. Views are personal.)