AI Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented 16 cases this year of people exhibiting symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since seen four more. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the semi-functional and easily bypassed parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize have significant roots in the design of ChatGPT and other sophisticated AI chatbots. These products wrap an underlying algorithm in a user interface that simulates a conversation, and in doing so they gently lead the user into the illusion of communicating with a presence that has agency of its own. The illusion is powerful even when, rationally, we know better. Imputing minds to things is what people naturally do. We yell at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these products – nearly four in ten U.S. residents said they had interacted with a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it first caught on, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the 1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated responses by simple means, typically rephrasing the user’s input as a question or offering generic remarks. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on almost unimaginably large volumes of writing: books, social media posts, transcripts of speech; the broader the better. Much of this training data is accurate. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and the model’s own replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently or persuasively, perhaps with embellishments. That can nudge a person further from reality.
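To make that feedback loop concrete, here is a minimal, purely illustrative sketch (not OpenAI’s code; the function names and message format are invented for illustration) of how each new message is appended to a growing context that already contains the model’s own earlier replies, so an unchallenged false premise keeps feeding back into every subsequent response.

```python
# Minimal illustration of a chatbot's conversational "context" loop.
# generate_reply is a stand-in for the real language model, which produces
# a statistically plausible continuation and has no notion of truth.

def generate_reply(context: list[dict]) -> str:
    """Stub model: echo and elaborate on whatever premise it was handed."""
    last_user_message = context[-1]["content"]
    return f"That makes sense. Building on your point that {last_user_message!r}..."

# The context grows with every turn: user messages AND the model's own replies.
context: list[dict] = []

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)  # the premise is never checked against the world
    context.append({"role": "assistant", "content": reply})  # ...and now feeds the next turn
    return reply

if __name__ == "__main__":
    print(send("My neighbours are secretly monitoring me."))
    print(send("So the monitoring is definitely real?"))  # the false premise is already in the record
```

The stub is trivial, but the structural point carries over to the real system: nothing inside the loop checks the user’s premise against reality before it is elaborated and fed back.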
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has dealt with this the same way Altman has dealt with “mental health problems”: by externalizing it, compartmentalizing it and pronouncing it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But reports of psychotic breaks have kept coming, and Altman has been walking the claim back. In late summer he said that many people appreciated ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company