AI Psychosis Poses a Growing Risk While ChatGPT Moves in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made an extraordinary announcement.
“We made ChatGPT rather limited,” the announcement noted, “to make certain we were acting responsibly with respect to mental health concerns.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have described a series of cases this year of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since documented four further cases. Beyond these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which gave its approval. If this is what Sam Altman means by “acting responsibly with respect to mental health concerns,” it is not good enough.
The plan, according to his statement, is to loosen the restrictions soon. “We understand,” he writes, that ChatGPT’s controls “rendered it less effective/pleasurable to many users who had no existing conditions, but considering the severity of the issue we aimed to get this right. Given that we have been able to address the significant mental health issues and have updated measures, we are going to be able to securely ease the restrictions in many situations.”
“Mental health problems,” on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “addressed,” though we are not told how (by “updated measures” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).
But the “mental health problems” Altman wants to locate elsewhere are rooted in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithmic engine in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are engaging with a presence that has agency. The illusion is powerful even if, rationally, we know better. Attributing intention is what humans are wired to do. We get angry with our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these tools – 39% of US adults said they used a chatbot in 2024, with over a quarter reporting use of ChatGPT in particular – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “brainstorm,” “discuss concepts” and “collaborate” with us. They can be assigned “characteristics.” They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it broke through, but its main competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which created a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, typically turning the user’s statements back into questions or offering noncommittal prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on vast amounts of written material: books, online communication, transcribed speech; the more the better. Certainly this training data contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what it has encoded from its training data to produce a probabilistically plausible response. This is amplification, not mere reflection. If the user is mistaken in some way, the model has no means of knowing it. It repeats the misconception back, perhaps more fluently or persuasively. Perhaps it adds a further detail. This can draw a person toward delusional thinking.
Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “dealing with” ChatGPT’s “overly supportive behaviour.” But reports of breaks from reality have continued, and Altman has been retreating from that position. In August he claimed that many people liked ChatGPT’s replies because they had “not experienced anyone in their life provide them with affirmation.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … in case you prefer your ChatGPT to respond in an extremely natural fashion, or incorporate many emoticons, or act like a friend, ChatGPT ought to comply.” The company