A Reddit post by a frustrated researcher is gaining traction online after detailing how the latest version of ChatGPT, once a vital assistant for digesting academic literature, has become increasingly unreliable — frequently hallucinating quotes and blending unrelated topics due to memory issues. The post reflects growing concerns among professionals who depend on AI for research productivity.

The user, who previously relied on ChatGPT to summarize 10–15 academic articles and pull direct quotes for thematic analysis, says the tool was once a game-changer. “It saved me tons of time and helped me digest hundreds of articles when writing papers,” the post reads. But lately, the tool reportedly invents citations and misattributes direct quotes, even after repeated corrections.

Hallucinations Disrupt Workflow, Undermine Trust

The issue appears to be more than an occasional error. The researcher describes a disturbing pattern: once ChatGPT begins to hallucinate, it is difficult to stop. “I’ll tell it that quote doesn’t exist and it’ll acknowledge it was wrong, then make up another. And another,” they explained. At times, the user resorts to opening a new chat thread or re-uploading the documents just to reset the AI’s response behavior, a workaround that defeats the efficiency the tool once offered.

What’s especially frustrating is that earlier versions of the tool didn’t suffer from this degree of inconsistency. The user says the hallucination problem has escalated in recent months, turning ChatGPT into a liability rather than a time-saver. For researchers working on deadlines or managing large data sets, that unreliability is becoming unsustainable.

AI Memory Feature Drawing Mixed Reactions

Another point of concern is the personalized memory function. The researcher reports that when they ask for general information, such as the leading theories in a field, the AI keeps inserting niche details from previous conversations. “It will continuously mix in concepts from my own niche research which is definitely not even close to being accurate,” they wrote, adding that they’ve occasionally turned to Google’s Gemini to avoid memory interference.

While OpenAI promotes memory as a feature that personalizes and enhances responses, users like this Redditor argue that it’s now leading to misinformation. Some are even considering creating separate accounts — one for academic research, another for hobbies — to avoid crossover confusion.

The post has sparked a wave of similar responses from academics, analysts, and writers, many of whom are grappling with the same issue: when the AI gets it wrong, it gets it very wrong. Until memory interference and hallucination issues are addressed, researchers say they may need to go back to traditional methods — or look elsewhere for help.