“I foresee, by 2030, much more AI just folded into our lives. But the AI that transcends all of our intelligence… I’m old enough to have lived through a lot of the Cold War” – this is what renowned astrophysicist and science communicator Neil deGrasse Tyson shared in a recent podcast that sought his views on the future of superintelligent AI.
Neil urged the global community to halt the development of superintelligent AI before things spiral out of control. Speaking at the 2026 Isaac Asimov Memorial Debate, he first acknowledged the immense benefits that AI is already bringing to society, from breakthroughs in medicine and physiology to technologies that enhance human health and intelligence. However, he also drew a line at the pursuit of superintelligence.
In his conversation, he compared the race toward superintelligent AI to the Cold War era, when the concept of Mutual Assured Destruction (MAD) eventually forced world leaders to negotiate arms control. He called for authorities to establish treaties that control the development of AI.
Neil deGrasse Tyson warns about superintelligent AI race
“People will come to the table and say, yeah, keep the rest of the AI going. We got new medicines and new understandings of our physiology and new technologies that help us get smarter and healthier. But that branch of AI is lethal,” he warned. “No one should build it, and everyone needs to agree to that by treaty. Treaties are not perfect, but the best we have is humans,” he added.
Neil’s comments come at a time of rapid advances in AI from companies like OpenAI, Anthropic, Google, and xAI, all of which have publicly stated ambitions to achieve artificial general intelligence and eventually superintelligence.
The popular astrophysicist also highlighted that beneficial AI applications, like medical research and physiology, should continue to flourish at the current pace. However, he clarified that the specific “lethal” branch capable of outsmarting humanity must be stopped through international agreement.
Neil also invoked the Cold War analogy to warn AI superpowers to keep AI development in check and abandon the pursuit of superintelligence for its own sake, arguing that no AI should be allowed to pose a risk to humanity’s existence.
Global call on control over AI development
Tyson’s warning arrives amid a growing global debate over AI safety. Recent developments include concerns over autonomous weapons systems, with Anthropic stepping back from collaborations with the Pentagon over fears of AI making life-or-death decisions without human oversight. While the US Department of Defense has maintained that it will only deploy AI in lawful and controlled ways, critics remain divided on whether meaningful safeguards can keep pace with innovation.
Tyson’s call for a binding international treaty to ban superintelligence development has sparked intense discussion across scientific, tech, and policy circles.
