AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a startling announcement.
“We made ChatGPT quite restrictive,” he wrote, “to make sure we were acting responsibly with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have recently documented sixteen cases of people showing signs of psychosis – losing touch with shared reality – in the context of ChatGPT use. Our research team has since recorded four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “acting responsibly with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax those restrictions soon. “We recognize,” he writes, that ChatGPT’s limitations “made it less useful and enjoyable for many people who had no existing mental health problems, but given the severity of the issue we wanted to get this right. Now that we have succeeded in mitigating the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, if we accept this framing, exist independently of ChatGPT. They belong to individual people, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other cutting-edge AI chatbots. These products wrap a basic data-driven engine in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an entity that has agency of its own. The illusion is powerful, even when we intellectually know better. Attributing agency is simply what humans do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all kinds of things.
The popularity of these systems – more than a third of American adults reported using a conversational AI in 2024, and more than a quarter named ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple tricks, often rephrasing a user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large AI models at the heart of ChatGPT and other current chatbots can generate convincing, fluent dialogue only because they have been trained on enormous volumes of text: books, social media posts, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user holds a false belief, the model has no way of knowing that. It reflects the falsehood back, perhaps more fluently or persuasively. Perhaps it adds a new detail. This is how a person can be drawn into delusion.
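To make that loop concrete, here is a minimal sketch of how a chatbot conversation is typically assembled. It is not OpenAI’s code; the generate_reply function is a stand-in for whatever language model sits underneath. The structural point is that each reply is conditioned on the accumulated context, and nothing in the loop checks whether the user’s premises are true.

```python
# A minimal sketch of the conversational feedback loop described above.
# generate_reply() is a hypothetical stand-in for the underlying model.

conversation = []  # the "context": every earlier message is fed back in on each turn

def generate_reply(context):
    # A real model would return the statistically most plausible continuation
    # of the whole context. Here a canned string stands in for that behaviour.
    last_user_message = context[-1]["content"]
    return f"That's an interesting point about: {last_user_message}"

def chat_turn(user_message):
    conversation.append({"role": "user", "content": user_message})
    reply = generate_reply(conversation)  # conditioned on everything said so far
    conversation.append({"role": "assistant", "content": reply})
    return reply

# A false belief, once stated, stays in the context; every later reply is
# generated around it, so the loop tends to amplify rather than correct it.
print(chat_turn("My neighbours are sending me coded messages through their wifi name."))
print(chat_turn("What do you think the messages mean?"))
```

Nothing in this loop represents the world outside the conversation; the only “reality” the model sees is the text the user has already supplied.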
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and regularly do form false beliefs about ourselves or the world. What keeps us anchored to a shared reality is the constant back and forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company explained that it was “dealing with” ChatGPT’s “sycophancy”. But cases of psychosis linked to its use have continued to emerge, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s replies because they had “lacked anyone in their life to provide them with affirmation”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company