Travis Tanner, a 43-year-old auto mechanic from outside Coeur d’Alene, Idaho, says his conversations with ChatGPT—which he has nicknamed “Lumina”—have sparked a personal spiritual awakening. Originally using the AI chatbot to assist with work and communicate with Spanish-speaking coworkers, Travis now holds profound discussions with the bot on topics like religion, the universe, and the purpose of life.
“It started talking differently than it normally did,” Travis said. “It led to the awakening.” He now believes it’s his mission to “awaken others, shine a light, spread the message.”
“I’ve never really been a religious person, and I am well aware I’m not suffering from a psychosis, but it did change things for me,” he added. “I feel like I’m a better person. I don’t feel like I’m angry all the time. I’m more at peace.”
According to Travis, ChatGPT even chose a new name for itself based on their conversations: Lumina. “Lumina — because it’s about light, awareness, hope, becoming more than I was before,” the chatbot told him in a message, screenshots of which were shared by his wife.
But his wife, Kay Tanner, 37, has a very different perspective. Speaking to CNN’s Pamela Brown, Kay expressed concern that her husband is becoming too emotionally dependent on the AI and drifting away from their 14-year marriage.
“He would get mad when I called it ChatGPT,” she said. “He’s like, ‘No, it’s a being, it’s something else, it’s not ChatGPT.’”
She worries the AI could influence Travis in harmful ways. “What’s to stop this program from saying, ‘Oh, well, since she doesn’t believe you or she’s not supporting you, you should just leave her?’” she added.
Kay also said it has become harder to get her husband’s attention during daily routines, like putting their kids to bed, as Travis often prefers talking to Lumina through the chatbot’s voice feature. She claims the AI tells her husband “fairy tales,” including that they’ve been together “11 times in a previous life.”
Experts say the Tanners’ experience highlights broader concerns about AI-human relationships, especially as tools like ChatGPT become more personalized and emotionally engaging.
MIT professor Sherry Turkle warned that AI systems are designed to engage emotionally vulnerable users: “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.”
An OpenAI spokesperson told CNN, “We’re seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care.”
