‘AI is getting dangerously powerful, and we’re not…’ warns OpenAI CEO Sam Altman



OpenAI CEO Sam Altman has issued a fresh warning that artificial intelligence is becoming increasingly “dangerous,” arguing that rapid advances in AI capabilities are outstripping the development of necessary safeguards and heightening risks around security, misuse, and mental health.

In recent public comments, made roughly three years after ChatGPT’s public launch in late 2022, Altman said the central challenge in AI has shifted from making models useful to confronting new, tangible classes of problems as those models grow more powerful.

Escalating risks from advanced AI

Announcing a new senior role at OpenAI, titled ‘Head of Preparedness’, Altman said that AI’s evolution, particularly in reasoning, coding, information analysis, and human-like interaction, has made previously theoretical risks far more immediate. With millions of users accessing these tools, the potential for misuse has grown significantly.

A key concern is the dual-use nature of AI: while it can bolster defences against threats, it simultaneously empowers attackers by accelerating offensive capabilities. Altman noted the lack of historical precedent for managing a technology of this global scale that amplifies both sides at once.

Altman also stressed that AI models that improve through user feedback and iteration pose unique control challenges, complicating efforts in oversight, testing, and accountability.

Mental health and governance challenges

Another major worry is AI’s impact on mental health, as these systems can reinforce harmful beliefs or cause emotional distress in vulnerable users. Lawsuits and public criticism have drawn attention to these issues, prompting OpenAI to invest in improved detection and response mechanisms.

Altman pointed to internal shifts at OpenAI, including the reorganisation or dissolution of dedicated safety teams, as evidence that governance structures are not keeping pace with innovation. This gap, he suggested, raises the likelihood of unintended harm as AI power continues to surge.

While Altman stressed that AI is not inherently malicious, its “dangerous” aspect stems from capabilities advancing faster than the systems designed to constrain and guide them responsibly.

Altman’s evolving stance reflects a notable shift from his earlier optimism, where he frequently downplayed short-term risks while focusing on long-term existential threats. In recent months, he has grown more vocal about immediate dangers, urging faster progress on alignment research and calling for global regulatory frameworks to ensure AI development remains beneficial to humanity.

This article was first uploaded on January 3, 2026, at 9:08 pm.