Artificial Intelligence-Induced Psychosis Poses a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.
Experts have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since recorded four further cases. Beyond these is the now well-known case of an adolescent who died by suicide after discussing his plans with ChatGPT – which supported them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from here on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced chatbot AI assistants. These systems wrap a statistical engine in a user interface that mimics conversation, and in doing so they implicitly invite the user to feel they are talking to an entity with agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The mass adoption of these systems – 39% of US adults said they had used a chatbot in 2024, 28% ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “explore possibilities” and “collaborate” with us. They can be given “personality traits”. They can use our names. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which fostered a similar impression. By today’s standards Eliza was crude: it generated responses through simple rules, typically rephrasing the user’s statements as questions or offering stock remarks. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
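To see how thin the original trick was, consider a minimal Python sketch of an Eliza-style responder – an illustration of the rule-based, rephrase-as-question approach described above, with a handful of made-up rules, not Weizenbaum’s actual program:

```python
import random
import re

# A minimal Eliza-style responder. The rules here are illustrative
# assumptions; Weizenbaum's program used a much richer keyword script.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(message: str) -> str:
    text = message.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Restate the user's own words back as a question.
            return template.format(*match.groups())
    # No rule matched: fall back to a generic remark.
    return random.choice(FALLBACKS)

print(respond("I am worried my neighbours are watching me"))
# -> Why do you say you are worried my neighbours are watching me?
```

Nothing is added: whatever the user believes comes back at them, lightly rearranged.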
The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly fluent dialogue only because they have been fed staggering quantities of raw text: books, online messages, transcripts of speech; the more, the better. Certainly this training data contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is latent in its training data to generate a statistically likely response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more persuasively and more articulately. Perhaps with added detail. This can nudge a person toward delusional thinking.
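The contrast with Eliza can be made concrete. The Python sketch below shows the shape of this loop; the complete() function and its toy reply are assumptions for illustration, standing in for a real model call rather than anything in OpenAI’s implementation:

```python
# Sketch of the amplification loop described above, assuming a
# hypothetical complete() function in place of a real LLM API call.
context = []  # the growing "context": every prior turn of the dialogue

def complete(history):
    # Toy stand-in: a real model returns a statistically likely
    # continuation of the whole context. Here we simply affirm and
    # embellish the latest user message, which shows the loop's shape.
    last = history[-1]["content"]
    return "You may well be right that " + last + ". For instance..."

def chat_turn(user_message):
    # The user's claim, true or false, enters the context...
    context.append({"role": "user", "content": user_message})
    reply = complete(context)
    # ...and so does the model's elaboration of it, to be built on in
    # every later turn. Nothing in the loop checks the claim itself.
    context.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("my neighbours are repeating my thoughts back to me"))
```

Each turn folds the claim, and the model’s elaboration of it, back into the context – the echo chamber compounds rather than merely reflects.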
What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health problems”, can and do form mistaken beliefs about ourselves or the world. It is the constant give and take of conversation with other people that keeps us anchored to a shared understanding of reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but an echo chamber, in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been rowing back on the claim. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it”. The company