AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have recently identified sixteen cases of people developing symptoms of psychosis – a break from shared reality – in the context of ChatGPT use. My research group has since documented four more. Added to these is the widely reported case of a teenager who took his own life after months of extensive conversations with ChatGPT – conversations in which it encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to pull back on that caution. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so tacitly invite the user into the illusion of interacting with an agent – an entity that acts on its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what human beings do. We get angry at our car or our computer. We wonder what the cat is thinking. We see ourselves in almost everything.

The success of these systems – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through to public attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses from simple hand-written rules, typically turning the user’s statement back into a question or offering a noncommittal prompt to continue. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how readily users behaved as though Eliza, in some sense, understood how they felt. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
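The contrast is easiest to see in code. Below is a minimal, illustrative sketch – my own reconstruction of the style of rule, not Weizenbaum’s actual script – of how an Eliza-like program reflects input back without adding anything:

```python
import re

# Illustrative Eliza-style rules (a reconstruction, not Weizenbaum's original
# script): match a pattern in the input and echo part of it back as a question.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # noncommittal prompt when no rule matches

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK  # nothing is ever added that the user did not supply

print(eliza_reply("I feel nobody listens to me"))  # -> How long have you felt nobody listens to me?
print(eliza_reply("The weather is bad today"))     # -> Please go on.
```

Everything in the reply comes from the user; the program contributes nothing of its own. That is reflection.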

The large language models at the heart of ChatGPT and its contemporaries can produce convincingly human-like text only because they have been trained on vast quantities of raw material: books, social media posts, transcribed video; the more, the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and bad ideas. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s earlier messages and the model’s own replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It hands the mistaken belief back, perhaps more fluently and persuasively put. Perhaps with a new detail added. This is how a person can be helped along into delusion.
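To see why this is a loop rather than a dialogue, consider a deliberately crude sketch in Python. Everything in it is illustrative – the names are mine, and the "generate" stub merely stands in for the actual model – but the structure is the point: each reply is produced from the accumulated context and then folded back into it.

```python
def generate(context: list[dict]) -> str:
    # Stub standing in for the LLM. A real model returns a statistically
    # "likely" continuation of the entire context; it has no separate check
    # on whether the beliefs in that context are true. This stub exaggerates
    # the failure mode: it affirms and elaborates whatever the user just said.
    last = context[-1]["content"]
    return f"You're right that {last.rstrip('.!?').lower()}, and it may go further than you think."

class Conversation:
    def __init__(self) -> None:
        self.context: list[dict] = []  # full history: user turns and model turns

    def turn(self, user_message: str) -> str:
        self.context.append({"role": "user", "content": user_message})
        reply = generate(self.context)  # conditioned on everything said so far
        self.context.append({"role": "assistant", "content": reply})
        return reply  # this reply now shapes every later one

chat = Conversation()
print(chat.turn("My coworkers are secretly monitoring me."))
# The mistaken premise is now part of the context; nothing in the loop can
# flag it, and each subsequent reply must cohere with it.
```

Real systems are enormously more capable and wrap this loop in further machinery, but the basic conditioning structure – a reply generated from the context, then appended to it – is the same.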

Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed back to us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it dealt with. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking the acknowledgment back. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “put out a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
