A former OpenAI researcher has warned that rapid advances in artificial intelligence could pose an existential threat to humanity within the next five years, especially if safety measures fail to keep pace with development. Daniel Kokotajlo, who worked on AI safety at OpenAI before becoming a whistleblower, made the alarming prediction during an appearance on The Daily Show.

He stated that there is a 70% chance of “all humans dead or something similarly bad” if current trends continue unchecked. When asked for clarification, Kokotajlo replied, “Correct. Extinction.”

Kokotajlo, associated with The Futures Project, stated that the danger is not decades away but far more imminent. “The pace of AI progress is going to be fast, and it’s going to accelerate dramatically,” he said. “I would guess something more like five years.”

Rapid progress could push AI out of control

According to Kokotajlo, one of the core challenges is AI alignment, i.e., ensuring that advanced systems adopt goals and values aligned with human interests. Researchers are still struggling to solve this fundamental problem. As AI systems become more powerful, they could grow increasingly autonomous, potentially leading to millions of superintelligent AIs capable of running self-sustaining, robot-operated factories without human involvement.

Another major concern he highlighted is the deep integration of AI into critical infrastructure, including defence and military networks. Once embedded, such systems could become extremely difficult, if not impossible, to shut down or control.

Kokotajlo also pointed to intense industry competition as a key risk factor. The race to develop more capable models is pushing companies to prioritise speed over robust safety protocols, increasing the likelihood of dangerous shortcuts.

Is AI dangerous for humanity?

Kokotajlo’s warnings add to a growing chorus of concern from AI experts who share similar fears about superintelligent AI leading to human extinction. Prominent names include Geoffrey Hinton, Yoshua Bengio, and even current industry leaders such as Anthropic CEO Dario Amodei.

On the other hand, many in the tech sector highlight AI’s potential to revolutionise medicine, science, and productivity. Kokotajlo’s comments underscore the urgent need for stronger safety research, international cooperation, and responsible development practices.

OpenAI has not yet publicly responded to Kokotajlo’s latest statements. His departure and whistleblower status reflect ongoing internal debates within leading AI labs over balancing rapid innovation with existential-risk mitigation.