By Atanu Biswas


Amid the ongoing AI gold rush, the Future of Life Institute published an open letter in March titled “Pause Giant AI Experiments”, with more than 1,800 signatories, including Elon Musk, cognitive scientist Gary Marcus, author and historian Yuval Noah Harari, and Apple co-founder Steve Wozniak, asking for a six-month moratorium on the development of systems more powerful than GPT-4. Undoubtedly, generative AI and many of its incarnations, like ChatGPT or Midjourney, are not only spreading magic but also increasing the likelihood of misinformation, bias, and fake news in society. At the same time, “contemporary AI systems are now becoming human-competitive at general tasks,” the letter stated.

Yuval Noah Harari offered an enthralling critique of Terminator-style scenarios in an opinion piece for the New York Times: “Soon we will also find ourselves living inside the hallucinations of non-human intelligence.” Bill Gates, on the other hand, claimed in a recent article headlined “The Age of A.I. Has Begun” that he had witnessed two revolutionary technologies in his life: the first was the graphical user interface, the precursor of every modern operating system, and the second was generative AI. So, what should we do when it seems like generative AI is pushing the boundaries?


Many of the letter’s signatories, like AI expert Rafe Brena, joined the petition because they agreed that its “spirit” is right even though the letter itself has many flaws. Gary Marcus, a professor at New York University, for instance, stated, “The letter is not perfect, but the spirit is exactly right.” Another signatory, Adam Frank, an astrophysics professor at the University of Rochester, said, “We do not need to pause AI research. But we do need a pause on the public release of these tools until we can determine how to deal with them.”

The letter undoubtedly sparked a wider debate about whether AI development should be put on hold at all. The open letter cited twelve research articles by various experts. However, some authors of those papers criticised the letter for fearmongering with their research. For example, Margaret Mitchell, who previously oversaw ethical AI research at Google and is currently chief ethics scientist at the AI company Hugging Face, co-authored one of the cited articles, a March 2021 paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”; she wondered what it even meant to be “more powerful than GPT-4.” In a different yet related reaction, former Google CEO Eric Schmidt argued that such a pause would merely benefit Chinese competitors in the market.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said. The US Commerce Department has since formally announced that it is seeking public input on how to develop AI accountability measures, in order to guide US officials’ approach to the technology. But would such an initiative be sufficient to regulate the use and development of AI?

Might an AI pause be disastrous for innovation? Pedro Domingos of the University of Washington, author of the 2015 book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, is one expert vehemently opposed to the proposed six-month moratorium on the development of advanced AI. “The AI moratorium letter was an April Fools’ joke that came out a few days early due to a glitch,” tweeted Domingos. He believes the urgency and alarm about existential risk expressed in the letter are completely disproportionate to the capability of current AI systems, and he is shocked and disappointed that genuine AI experts, many of them signatories of the letter, have made that mistake. Moreover, he argues, it is implausible that a group of AI experts could work with regulators over just six months to mitigate such threats and ensure that AI is henceforth safe beyond a reasonable doubt. Nothing comparable has been achieved for the internet or the web in more than 50 and 30 years of their existence, respectively; in reality, we have not even reached a consensus on how to regulate them. And can we halt their development, or shut them down?


Yann LeCun, a 2018 Turing Award recipient who is sometimes referred to as one of the three godfathers of AI, thinks AI can bring about a renaissance. LeCun’s initial reaction upon hearing about the open letter was: “Why slow down the progress of knowledge and science?” It is worth noting that in his 2019 Scientific American article “Don’t Fear the Terminator”, LeCun had already dismissed the AI apocalypse as implausible.

“The year is 1440, and the Catholic Church has called for a six-month moratorium on the use of the printing press and movable type. Imagine what could happen if commoners get access to books,” quipped LeCun. Well, can AI research be paused? Maybe not. AI is humanity’s response to an increasingly complex physical world and global culture.

Do all the signatories of the pause-AI petition even believe a pause is feasible? I am not sure. According to media reports, however, Elon Musk, one of the most prominent signatories, has now created a new artificial intelligence company, X.AI Corp., incorporated in Nevada. It would certainly compete with companies like OpenAI as Silicon Valley battles for dominance in the rapidly developing technology. Yann LeCun has little to worry about: the AI-driven renaissance will no doubt continue. The question of caging the stochastic parrot, though, will keep haunting society.

The author is Professor of Statistics at the Indian Statistical Institute, Kolkata