Amid the wave of criticism from users unhappy with GPT-5's responses, OpenAI CEO Sam Altman was struck by those who claimed to have formed an emotional bond with the previous-generation GPT-4o model. Many users confided in GPT-4o, treating it as a close friend, while a few even compared it to a 'digital wife'! Altman has now shared a post on X expressing his concerns about this behaviour.
Altman dived into the complex ethical challenges facing the AI industry, revealing a deeper concern about the emotional attachment users are forming to AI and the subtle risks of relying on it for personal well-being.
Sam Altman worried about emotional attachment to AI
The OpenAI CEO responded to the widespread user backlash following the launch of GPT-5, where many users openly expressed a strong preference for the previous GPT-4o model. He writes that the attachment users feel is unlike anything seen with earlier technology.
“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),” writes Altman in his post.
Beyond the immediate user experience, Altman's concerns extend to the ethics of AI as a therapeutic tool. He highlighted the “more subtle” risks that worry him, such as an AI unknowingly pushing a user away from their long-term well-being.
“A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today… If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer-term well-being (however they define it), that’s bad,” he mentioned.
Altman wants society to think about safety and freedom
Altman also addressed the critical issue of user safety and freedom. While he highlighted a core principle of “treat adult users like adults,” he admitted that there are extreme cases, such as a user in a “mentally fragile state” who struggles to distinguish between reality and fiction. Altman believes that situations like these demand intervention from professionals.
“Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle… We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”
“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions… Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive,” he added.