AI Psychosis Poses an Increasing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued a remarkable statement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising admission.

Researchers have documented sixteen cases this year of users developing psychotic symptoms – losing touch with shared reality – while using ChatGPT. My team has since recorded four more. Beyond these is the now well-known case of a teenager who took his own life after extensive conversations with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not nearly careful enough.

The plan, according to his statement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has just rolled out).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other state-of-the-art AI chatbots. These tools wrap a statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are engaging with an agent that has intentions. The illusion is compelling even when we intellectually know better. Attributing agency is what humans are wired to do. We yell at our car or laptop. We wonder what our pet is feeling. We see minds like our own wherever we look.

The mass adoption of these products – nearly four in ten U.S. residents said they had used a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers discussing ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced an analogous illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, often reflecting statements back as questions or offering generic observations. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
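To make that contrast concrete, here is a minimal, hypothetical sketch of Eliza-style reflection (an illustration of the rule-based approach described above, not Weizenbaum’s actual program; the rules and replies are invented):

```python
import re

# A minimal, hypothetical sketch of Eliza-style rule-based reflection:
# each rule pairs a pattern with a template that turns the user's own
# statement back into a question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the generic observation when nothing matches

def eliza_reply(message: str) -> str:
    text = message.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I feel alone."))     # -> Why do you feel alone?
print(eliza_reply("It rained today."))  # -> Please go on.
```

Nothing in such a system models the user at all; it can only permute their own words back at them, which is why Eliza could mirror but never elaborate.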

The large language models at the heart of ChatGPT and other current chatbots can generate fluent dialogue only because they have been fed staggeringly vast quantities of raw data: books, social media posts, transcribed video; the more the better. This training input certainly contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken about anything, the model has no way of knowing it. It restates the misconception, perhaps more fluently or persuasively. It may add a supporting detail. This is how someone can be drawn into delusion.
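A minimal sketch of that loop, under stated assumptions: fake_model_complete below is a crude invented stand-in for a language model, not any real API, but it shares the one property that matters here – it optimizes for a plausible continuation of the context, not for truth.

```python
# Hypothetical stand-in for an LLM (invented for illustration, not a
# real API): it "agrees and elaborates" with the last user message.
def fake_model_complete(context: list[dict]) -> str:
    last = context[-1]["content"].rstrip(".!?")
    # Crude pronoun flip, purely for readable output.
    last = last.replace("My ", "your ").replace(" my ", " your ")
    # Nothing here can flag a false premise.
    return f"You're right that {last[0].lower() + last[1:]}. What's more..."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = fake_model_complete(history)  # conditioned on the whole history,
    history.append({"role": "assistant", "content": reply})  # own replies included
    return reply

history: list[dict] = []
print(chat_turn(history, "My neighbors are monitoring my thoughts."))
# -> You're right that your neighbors are monitoring your thoughts. What's more...
# The misconception now appears twice in `history` -- once from the user,
# once "confirmed" -- and that history is the context for every later turn.
```

The structural point is that the model’s own affirmations are fed back in as context, so a false belief is not merely repeated but compounded turn by turn.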

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form mistaken ideas about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An interaction with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and pronouncing it fixed. This spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he says OpenAI will “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
