By now, it’s evident that AI chatbots like ChatGPT tend to hallucinate under certain circumstances, which is why their creators often advise against taking outputs at face value – there’s always room for mistakes in the responses. However, a hilarious moment involving OpenAI’s CEO, Sam Altman, has gone viral from the “Mostly Human” podcast interview hosted by Laurie Segall. During the interview, ChatGPT’s voice mode dramatically hallucinated in real time while being tested as a stopwatch, with Altman himself watching the failure unfold.

The clip, originally from TikTok creator @huskistaken, shows a man asking ChatGPT in voice mode to time him while running a mile. He starts running and stops just seconds later, but the AI confidently claims he took over 10 minutes to complete the run. When the user corrects it, ChatGPT doubles down, insisting its timing is accurate and that the user is mistaken.

A classic case of AI hallucination caught live on camera.

Altman’s reaction to the ChatGPT fumble

The video then cuts to Sam Altman being shown the exact clip during his interview on the Mostly Human podcast. Altman’s reaction has become the highlight: he lets out a long, silent laugh, appears visibly uncomfortable, and stammers, “Uh, maybe, uhhh…” when asked about the issue. He eventually acknowledges it as a “known issue” with ChatGPT’s current voice model, noting that it lacks proper tool integration for accurate timing.

When @huskistaken later showed ChatGPT the interview, the chatbot hallucinated yet again, insisting on camera that its timing data was correct and that it can keep track of time – even after Altman had just admitted that it cannot.

The moment has blown up online, with some users calling it “comedy gold” and “the most relatable AI fail ever.”

Why ChatGPT hallucinated

According to Altman, the hallucination occurs because the voice mode currently tries to answer everything directly instead of admitting its limitations or using external tools for precise timing. Altman confirmed that OpenAI is working on improvements, but the incident highlights one of the persistent challenges with large language models — confidently making up facts when unsure, i.e., hallucinating.
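The tool integration Altman alludes to is straightforward in principle: a language model cannot measure elapsed time itself, so a reliable stopwatch has to delegate to a real clock. Below is a minimal, hypothetical sketch of what such a tool might look like (the class and method names are illustrative, not OpenAI’s actual implementation) — note that it also admits its limitation instead of guessing when no timer is running.

```python
import time

class StopwatchTool:
    """Hypothetical timing tool an assistant could call instead of
    estimating elapsed time itself (which leads to hallucination)."""

    def __init__(self):
        self._start = None

    def start(self):
        # Use a monotonic clock so the measurement can't go backwards.
        self._start = time.monotonic()
        return "timer started"

    def stop(self):
        if self._start is None:
            # Admit the limitation rather than invent a number.
            return "no timer running"
        elapsed = time.monotonic() - self._start
        self._start = None
        return f"elapsed: {elapsed:.1f} seconds"

# Usage sketch: the assistant would invoke these calls on the user's behalf.
watch = StopwatchTool()
watch.start()
time.sleep(0.2)      # stand-in for the mile run
print(watch.stop())  # reports the real elapsed time
```

The key design point is that the number comes from the system clock, not from the model's text generation, so there is nothing for the model to confabulate.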

Hence, users are always advised to cross-check factual responses from any AI chatbot, and to treat its advice or emotional guidance with similar caution.