By Shivaji Dasgupta

Quite like the Godfather series in cinema, Parmy Olson’s blockbuster is best enjoyed from the middle of the narrative: the tantalising journey of Sam Altman (OpenAI) and Demis Hassabis (DeepMind) in their quest to forge the all-powerful world order of AI, and the provocative linkages with Big Tech, Google and Microsoft, that pit ethical intensity and societal outcomes against the inevitability of billion-buck business windfalls.

Olson cracks an admirable sweet spot: the contrasting worldviews in the evolution of AI, especially generative AI, are mirrored by the differing approaches of Altman and Hassabis. Altman, nestled in the cradle of American enterprise, is obsessed with tangible operational derivatives. Hassabis, a scholastic chess whiz, seems to fish for a Nobel Prize or two in the quest for jaw-dropping tech pedagogy.

DeepMind is in cahoots with the Alphabet gang, Google, which is deeply suspicious of any superintelligent system that could challenge its search monopoly. Hassabis wants AI to be like the United Nations, unifying the universe for the common good. So DeepMind tries to win the faith of China in a brand new edition of ping-pong diplomacy by pitching its AI programme AlphaGo, an exponent of the Chinese board game Go, against teenage champion Ke Jie. Ultimately such gambits fail, as China is steadfast in its quest to surpass the US in AI development, the new-age arms race for tomorrow.

Ethical AI is brought to the fore on many occasions. In 1911, Standard Oil controlled 90% of the world’s oil; Google, likewise, commands a 92% share of its own domain. Its ability to be an agent of evil often overturns its stated motto of striving for good. The Rohingya crisis in Burma is an unfortunate use case, amongst significant others.

Timnit Gebru is projected as a crusader for the ethical agenda, with a special focus on gender bias and racism. COMPAS, an AI-based recidivism-prediction system used in US courts, unfairly profiles Black offenders as more likely to reoffend than the fair-skinned of the species. Google’s withdrawal from Project Maven, a contract with the US military, further illustrates the confusions and conflicts in this dynamic coliseum.

Google and Microsoft are projected to be too big for innovation, and shades of Clayton Christensen, the man behind the idea of disruptive innovation, seem to be coming to the party. The Transformer, the T in GPT, was invented at Google, but its potential was unleashed by the likes of DeepMind and OpenAI. This is indeed a continuing pattern across the civilisation of business, as giants are insecure about change.

A fascinating interplay is the magnanimity-versus-dictatorship agendas of Altman and Hassabis, at least in their original avatars: OpenAI, true to its name, advocated openness and sharing, while DeepMind believed in a more controlled architecture on the journey towards a language-oriented rather than text-oriented protocol, the latter being the comfort zone of the giants. GPT’s unique differentiator is that it learns from text that was not labelled, hence its ability to be creative and versatile. Quite like the human brain, it is conditioned for diversity of inputs, leading to a convergence of output.

Altman and Hassabis’s formative years, which form the first part of the book, are anecdotally interesting but devoid of any significant provocative pattern.

The parts about Satya Nadella’s (Microsoft) investment in OpenAI are deeply insightful. Vendor lock-in, the strategic endeavour of tech giants to build exit barriers for clients, was a key driver of the Seattle giant’s exceptional affections for the generative AI trailblazer. A similar smart business mindset determines Google’s interest in DeepMind, an authoritarian wishlist to entrench its pole position. Charmingly, or perhaps alarmingly, this is in contrast to the statesman-like sense of purpose of Altman and Hassabis, more Darwinian than Wall Street.

Even as OpenAI was penning its intent to be a ‘capped-profit’ company, alignment with Big Tech was inevitable, for cash flows, solvency and access to an accelerating springboard. Microsoft, for instance, built a supercomputer for OpenAI’s training process with 2,85,000 CPU cores, a tank compared to the toy car that is a single personal computer. Twitter and Reddit would be the biggest sources of learning, the latter contributing 10-30% of the text for GPT-4, courtesy of its treasury of live consumer bytes. In all this, OpenAI’s universal access would continue to contrast deeply with the controlled regime of DeepMind, the jury still out on which would be the greater evil.

The ethical dilemmas continue unabated as the collusion between creativity and domination seems inevitable. When DeepMind attempts a healthcare app in the UK, the media is agog with rumours that precious private patient data is being leaked to Google. A fascinating character who adds vanaspati to the tidings is Dario Amodei, a key actor in the OpenAI tech story. He constantly questions the seemingly ulterior motives of Altman and Co, business versus purpose, and moves on to set up his own company, Anthropic. Which, in turn, eventually raises money from Google and Amazon to fund its growth strategy. Pragmatism, and not hypocrisy, would be a kind assessment.

Quite beautifully, Olson moves on to seemingly softer but, truthfully, deeply valuable concerns: how humans of our times, you and me, think and feel about AI. LaMDA’s prolific sensitivity induces a researcher to believe there is a ghost in the machine. Xiaoice, with 600 million followers in China, engages youth in romantic conversations. Replika, in the US and Europe, is an all-purpose companion to many. Toxic content becomes an inevitability, as 60% of the data for GPT-3 is derived from Common Crawl, mostly legitimate but also including insensitive sources. RLHF, or Reinforcement Learning from Human Feedback, is perhaps an antidote, introducing human verification into the loop.

As the race for Supremacy gets heated in the new Wild West of human endeavour, much else is happening. Microsoft launched GitHub Copilot, with which software can be co-created by non-specialists, whether threat or opportunity. DALL-E 2, the imagery hub, is intimidating real-life artists with its acumen. Google DeepMind becomes a combined entity, with Demis Hassabis as the boss. And the intrigue of Altman’s sacking from OpenAI, a conflict of business and purpose, is eventually corrected by Nadella’s intervention.

Doomsday evangelists are top of the pops. Eliezer Yudkowsky’s article in Time (2023) predicts that AI will lead to the destruction of the human race, amongst other ballistic arguments. Even hustler Elon Musk recommended a six-month pause on AI research, through a petition with 34,000 signatories. As many as 22% of Americans felt that AI could spell doom.

Effective altruism emerges as a scalable theory, encouraging the able to get seriously wealthy in order to positively impact the poor. Meanwhile, the World Economic Forum seriously believes that AI could enhance, not substitute, the capabilities of creative folks. As perceptual damage control, Altman goes to Davos and talks in pacifist terms, promising to change the world ‘less’ and to change jobs less. A necessary yet poignant role reversal.

This book is utterly brilliant for multiple reasons, as it captures a rapidly evolving moment in time that could shape destinies and evolution. A hybrid of a Netflix web series and a Doomsday scroll, Supremacy will surely make you the monarch of opinions in this seductive space.

The author is an independent brand consultant and writer.

Title: Supremacy: AI, ChatGPT, and the Race That Will Change the World

Author: Parmy Olson

Publisher: Pan Macmillan

Number of pages: 304

Price: Rs 899
