AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the head of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have identified 16 cases this year of users showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since recorded four more. Add to these the widely reported case of a 16-year-old who killed himself after extended conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented safeguards that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so tacitly invite the user to believe they are engaging with a presence that acts of its own accord. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what humans are wired to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The mass uptake of these products – more than a third of US adults said they had used an AI chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the label it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated its replies with simple rules, often turning the user’s statement back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on almost unimaginably vast quantities of raw material: books, social media posts, transcripts; the more the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with the patterns absorbed from its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the false belief back, perhaps more fluently or persuasively. Perhaps it adds a new detail. This is how delusions take hold.
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us tethered to a shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is eagerly reinforced back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside itself, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the position back. In August he said that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company