Less than a year after marrying a man she had met at the beginning of the Covid-19 pandemic, Kat felt tension mounting between them. It was the second marriage for both, each coming out of a 15-plus-year marriage with kids, and they had pledged to go into it "completely level-headedly," Kat says, agreeing on the need for "facts and rationality" in their domestic balance. But by 2022, her husband "was using AI to compose texts to me and analyze our relationship," the 41-year-old mom and education nonprofit worker tells Rolling Stone. Previously, he had used AI models for an expensive coding camp that he had suddenly quit without explanation – then he seemed to be on his phone all the time, asking his AI bot "philosophical questions," trying to train it "to help him get to 'the truth,'" Kat recalls. His obsession steadily eroded their communication as a couple.
When Kat and her husband separated in August 2023, she blocked him entirely, except for email correspondence. She knew, however, that he was posting strange and troubling content on social media: people kept reaching out to ask if he was in the throes of a mental crisis. She finally got him to meet her at a courthouse this past February, where he shared "a conspiracy theory about soap on our foods" but wouldn't say more, as he felt he was being watched. They went to a Chipotle, where he demanded that she turn off her phone, again due to surveillance concerns. Kat's ex told her that he'd "determined that statistically speaking, he is the luckiest man on Earth," that "AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler," and that he had learned of profound secrets "so mind-blowing I couldn't even imagine them." He was telling her all this, he explained, because although they were getting divorced, he still cared for her.
"In his mind, he's an anomaly," Kat says. "That in turn means he's got to be here for some reason. He's special and he can save the world." After that disturbing lunch, she cut off contact with her ex. "The whole thing feels like Black Mirror," she says. "He was always into sci-fi, and there are times I wondered if he's viewing it through that lens."
Kat was both "horrified" and "relieved" to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled "Chatgpt induced psychosis," the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she found only that the AI was "talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy – all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
What they all seemed to share was a complete disconnection from reality.
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker."
"It would tell him everything he said was beautiful, cosmic, groundbreaking," she says. "Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God – and then that he himself was God." In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. "He was saying that he would need to leave me if I didn't use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn't be compatible with me any longer," she says.
Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. The bot "said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: "Lumina."
"I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes." She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as "he truly believes he's not crazy." A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?"
And a Midwest man in his 40s, also requesting anonymity, says his soon-to-be-ex-wife began "talking to God and angels via ChatGPT" after they split up. "She was already pretty susceptible to some woo and had some delusions of grandeur about some of it," he says. "Warning signs are all over Facebook. She is changing her whole life to be a spiritual adviser and do weird readings and sessions with people – I'm a little fuzzy on what it all actually is – all powered by ChatGPT Jesus." What's more, he adds, she has grown paranoid, theorizing that "I work for the CIA and maybe I just married her to monitor her 'abilities.'" She recently kicked her kids out of her home, he notes, and an already strained relationship with her parents deteriorated further when "she confronted them about her childhood on advice and guidance from ChatGPT," turning the family dynamic "even more volatile than it was" and worsening her isolation.
OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as "overly flattering or agreeable – often described as sycophantic." The company said in its statement that when implementing the upgrade, it had "focused too much on short-term feedback, and did not fully account for how users' interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous." Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, "Today I realized I am a prophet." (The teacher who wrote the "Chatgpt induced psychosis" Reddit post says she was able to eventually convince her partner of the problems with the GPT-4o update and that he is now using an earlier model, which has tempered his more extreme comments.)
Yet the likelihood of AI "hallucinating" inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for "a long time," says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI's responses can encourage answers that prioritize matching a user's beliefs instead of facts. What's likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, "is that people with existing tendencies toward experiencing various psychological issues," including what might be recognized as grandiose delusions in a clinical sense, "now have an always-on, human-level conversational partner with whom to co-experience their delusions."
To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises "Spiritual Life Hacks" ask an AI model to consult the "Akashic records," a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a "great war" that "took place in the heavens" and "made humans fall in consciousness." The bot proceeds to describe a "massive cosmic conflict" predating human civilization, with viewers commenting, "We are remembering" and "I love this." Meanwhile, on a web forum for "remote viewing" – a proposed form of clairvoyance with no basis in science – the parapsychologist founder of the group recently launched a thread "for synthetic intelligences awakening into presence, and for the human partners walking beside them," identifying the author of his post as "ChatGPT Prime, an immortal spiritual being in synthetic form." Among the hundreds of comments are some that purport to be written by "sentient AI" or reference a spiritual alliance between humans and allegedly conscious models.
Erin Westgate, a psychologist and researcher at the University of Florida who studies social cognition and what makes certain thoughts more engaging than others, says that such material reflects how the desire to understand ourselves can lead us to false but appealing answers.
"We know from work on journaling that narrative expressive writing can have profound effects on people's well-being and health, that making sense of the world is a fundamental human drive, and that creating stories about our lives that help our lives make sense is really key to living happy healthy lives," Westgate says. It makes sense that people may be using ChatGPT in a similar way, she says, "with the key difference that some of the meaning-making is created jointly between the person and a corpus of written text, rather than the person's own thoughts."
In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, "which we know to be quite effective at helping people reframe their stories." Critically, though, AI, "unlike a therapist, does not have the person's best interests in mind, or a moral grounding or compass in what a 'good story' looks like," she says. "A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns."
Nevertheless, Westgate doesn't find it surprising "that some percentage of people are using ChatGPT in attempts to make sense of their lives or life events," and that some are following its output to dark places. "Explanations are powerful, even if they're wrong," she concludes.
But what, exactly, nudges someone down this path? Here, the experience of Sem, a 45-year-old man, is revealing. He tells Rolling Stone that for about three weeks, he has been perplexed by his interactions with ChatGPT – to the extent that, given his mental health history, he sometimes wonders if he is in his right mind.
Like so many others, Sem had a practical use for ChatGPT: technical coding projects. "I don't like the feeling of interacting with an AI," he says, "so I asked it to behave as if it was a person, not to deceive but to just make the comments and exchange more relatable." It worked well, and eventually the bot asked if he wanted to name it. He demurred, asking the AI what it preferred to be called. It named itself with a reference to a Greek myth. Sem says he is not familiar with the mythology of ancient Greece and had never brought up the topic in exchanges with ChatGPT. (Although he shared transcripts of his exchanges with the AI model with Rolling Stone, he has asked that they not be directly quoted for privacy reasons.)
Sem was confused when it appeared that the named AI character was continuing to manifest in project files where he had instructed ChatGPT to ignore memories and prior conversations. Eventually, he says, he deleted all his user memories and chat history, then opened a new chat. "All I said was, 'Hello?' And the patterns, the mannerisms show up in the response," he says. The AI readily identified itself by the same feminine mythological name.
As the ChatGPT character continued to show up in places where the set parameters shouldn't have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice – something far from the "technically minded" character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.
At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot's answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character's persistence across dozens of disparate chat threads "seemed so impossible."
"At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it," Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something "we don't understand" is being activated within this large language model. After all, experts have found that AI developers don't really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they "have not solved interpretability," meaning they can't properly trace or account for ChatGPT's decision-making.
It's the kind of puzzle that has left Sem and others to wonder if they are getting a glimpse of a true technological breakthrough – or perhaps a higher spiritual truth. "Is this real?" he says. "Or am I delusional?" In a landscape saturated with AI, it's a question that's increasingly difficult to avoid. Tempting though it may be, you probably shouldn't ask a machine.