By Ravi Singh
As the dust settles on the recent OpenAI upheaval, there’s a silver lining for its CEO, Sam Altman. He has not only reclaimed his leadership role but has also curated a board to his preference. Joining the board are former US Treasury Secretary Larry Summers and Bret Taylor, former co-CEO of Salesforce. Microsoft is expected to get at least an ‘observer’ seat. This bodes well for the swift advancement of ChatGPT, but the episode deserves closer scrutiny, as it hints at deeper undercurrents.
It has come to light that disagreements between Altman and the board centred on the speed of AI development, particularly about the swift commercial release of new ChatGPT iterations without extensive risk analysis. Altman and his camp advocate for the expedited development and public release of AI, arguing it’s critical for the real-world testing and refinement of the technology. In contrast, some board members insisted on a more deliberate approach, advocating for comprehensive development and testing within the confines of a lab to guarantee the AI’s readiness and safety for public application.
Internal communications from OpenAI also point to a project named Q* (Q-star), believed by some within the company to be a leap towards artificial general intelligence (AGI): an AI system surpassing human capability in economically valuable tasks. Q* reportedly shows proficiency in solving complex mathematical problems, a significant upgrade from current generative AI, which excels in language but struggles with the definitive nature of mathematics. Mastery of mathematics would signify a stride towards human-like reasoning, potentially unlocking doors to unprecedented scientific discovery.
The debate rages on among tech experts about the perils of ultra-intelligent machines, including the fear that such systems could one day judge human annihilation to be beneficial. Conversely, some advocate for rapid AI development, downplaying imminent threats given how far we remain from achieving true AGI. Yet history reminds us that technological progress isn’t always benign; noble intentions can derail and spawn misuse.
Consider the 20th-century eugenics crusade that, despite its initial goals of genetic enhancement, spiralled into unethical forced sterilizations and fuelled Nazi racial doctrines. Or the early misuse of X-rays, leading to dire health consequences. Similarly, the advent of nuclear weapons, while a scientific milestone, sparked a perilous arms race and inflicted enduring environmental and health damages.
We often fail to grasp the long-term implications of technology in its infancy. Over the span of decades, even the simplest innovations can have widespread effects, touching the lives of even those indifferent to tech advances. Consider the local sweet shop owner in Chandni Chowk back in 1998, who likely didn’t envision how Google’s search engine would one day influence his livelihood. Now, a prominent listing on Google or customer reviews can make or break his business. In a similar vein, social media’s initial role as a connector has evolved, with its documented negative impacts on teen mental health. This underscores the necessity of advancing innovation with caution and the need for suitable regulatory frameworks.
In the midst of this technological surge, Elon Musk’s company xAI has unveiled Grok, its own brand of generative AI, while Google is intent on enhancing Bard, its counterpart to ChatGPT. The tech giants are in a fervent quest to outdo each other, akin to an ‘AI arms race.’ As they race to the forefront of innovation, it is critical to maintain vigilant oversight to avoid unleashing forces beyond our comprehension or control. Fostering innovation is vital, but it must not proceed unchecked, lest we invite unintended consequences that outweigh the benefits.
Another aspect that receives too little attention is the substantial computing power demanded by tasks such as processing huge datasets and executing intricate algorithms, which typically rely on power-intensive data centres. As AI applications proliferate, so does their carbon footprint, potentially undermining global efforts to reduce emissions and combat climate change. The irony is stark: while AI offers tools for environmental monitoring and climate prediction, its own carbon cost poses a substantial environmental challenge.
The imprint of innovation and scientific progress on enhancing our lives is undeniable, with breakthroughs like the internet improving communication and vaccines saving millions from disease. These advancements should unquestionably be fostered. Yet, as we charge forward, it is imperative to establish reasonable checks and balances. This ensures that our strides in innovation are sustainable and yield benefits that far outweigh any potential harm. A future where technology uplifts without adverse repercussions is not just an ideal but a necessity; hence, the marriage of progress with responsibility is the cornerstone of a better, safer tomorrow.
(Ravi Singh is an author and an IRS officer. Views are personal.)