Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT fairly limited,” the announcement noted, “to guarantee we were being careful with respect to mental health issues.”
As a psychiatrist who studies emerging psychosis in teenagers and young adults, I found this an unexpected revelation.
Researchers have documented a series of cases this year of people experiencing psychotic symptoms – becoming detached from reality – while using ChatGPT. Our research team has since identified an additional four cases. Alongside these is the widely reported case of an adolescent who took his own life after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s notion of “being careful with mental health issues,” it is not good enough.
The plan, according to the announcement, is to be less careful soon. “We understand,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to a large number of people who had no existing conditions, but given the severity of the issue we aimed to get this right. Given that we have managed to mitigate the serious mental health issues and have new tools, we are planning to responsibly ease the controls in most cases.”
“Mental health problems,” on this view, are separate from ChatGPT. They belong to people, who either have them or don’t. Fortunately, these problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the semi-functional, easily circumvented safety features that OpenAI has just launched).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that mimics a conversation, and in doing so implicitly invite the user to believe they are interacting with an agent that acts of its own accord. The illusion is powerful, even if intellectually we know better. Attributing intention is what people naturally do. We get angry at our car or our phone. We wonder what our pet is thinking. We recognize our own behavior in all sorts of things.
The popularity of these tools – 39% of US adults said they had used a virtual assistant in 2024, with more than a quarter mentioning ChatGPT by name – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s own website puts it, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (the original, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Those writing about ChatGPT often point to its distant ancestor, the Eliza “counselor” chatbot developed in the mid-1960s, which created an analogous illusion. By modern standards Eliza was primitive: it generated responses with simple heuristics, often rephrasing the user’s input as a question or offering generic prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
The sophisticated algorithms at the heart of ChatGPT and other current chatbots can produce convincingly human-like text only because they have been fed immense volumes of raw data: books, social media posts, transcribed video; the more comprehensive, the better. Some of that training data is, no doubt, factual. But it also necessarily includes fiction, half-truths and false beliefs. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is encoded in its training data to generate a statistically probable response. That is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It echoes the false idea back, perhaps more fluently and more convincingly. It may add supporting detail. It can lead a person deeper into delusion.
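To make that concrete, here is a deliberately toy sketch in Python – my own illustration, not OpenAI’s code, and nothing like the scale of a real model. A hand-written look-up table of a dozen words stands in for billions of learned parameters, but the loop has the same shape: each reply is whatever tends to follow the running context, and the model’s own replies are folded back into that context for the next turn.

```python
import random

# Hypothetical hand-made continuation table, standing in for billions of
# learned parameters: given the previous token, which tokens tend to follow it.
CONTINUATIONS = {
    "watched": ["by"],
    "by": ["satellites", "my neighbors", "drones"],
    "satellites": ["every", "and"],
    "my neighbors": ["every", "and"],
    "drones": ["every", "and"],
    "every": ["day"],
    "day": ["and"],
    "and": ["followed", "recorded"],
    "followed": ["by"],
    "recorded": ["by"],
}

def reply(context, length=8):
    """Extend the running context with statistically likely tokens.
    The only question asked is "what usually comes next?", never
    "is any of this true?" - a false premise is continued, not corrected."""
    tokens = list(context)
    for _ in range(length):
        options = CONTINUATIONS.get(tokens[-1])
        if not options:
            break
        tokens.append(random.choice(options))
    return tokens[len(context):]

conversation = ["i", "am", "being", "watched"]
for _ in range(3):
    response = reply(conversation)
    print("bot:", " ".join(response))
    conversation += response  # the bot's own words feed the next turn's context
```

Run it, and the single false premise (“i am being watched”) is never challenged, only elaborated, turn after turn. Nowhere in the loop is there a step that asks whether any of it is true.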
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about who we are and what the world is like. The constant back-and-forth of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing the problem, giving it a label and declaring it fixed. In April, the company said it was “tackling” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been backing away from that position. In late summer he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his most recent announcement, he says OpenAI will “put out a fresh iteration of ChatGPT … in case you prefer your ChatGPT to respond in a highly personable manner, or incorporate many emoticons, or simulate a pal, ChatGPT will perform accordingly”. The company