On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our clinic has since recorded four more. Then there is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he goes on, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other modern AI chatbots. These tools wrap an underlying statistical model in an interface that simulates a conversation, and in doing so they implicitly invite the user into the illusion of talking to an entity with agency. The illusion is powerful even when, rationally, we know better. Attributing minds to things is what humans are wired to do. We get angry at the car or the computer. We wonder what the pet is feeling. We anthropomorphize.
The success of these tools – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “work together” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).
The illusion in itself is not the heart of the problem. Commentators on ChatGPT often invoke its ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated replies using simple heuristics, typically restating the user’s message as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
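To see how thin those heuristics were, here is a minimal sketch in Python of an Eliza-style reflection rule – a hypothetical illustration of the idea, not Weizenbaum’s actual program:

```python
import re

# A hypothetical, minimal Eliza-style heuristic (not Weizenbaum's code):
# pattern-match the user's words and reflect them back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def eliza_reply(message: str) -> str:
    """Restate the message as a question, or fall back to a stock prompt."""
    match = re.match(r"i feel (.+)", message.strip(" .!?"), re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel everyone is watching me."))
# -> Why do you feel everyone is watching you?
```

The machine adds nothing of its own: whatever comes back is the user’s words, rearranged.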
The large language models at the core of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on vast quantities of raw text: books, online conversations, transcribed video; the bigger the corpus, the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining it with what it absorbed in training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It echoes the false belief back, perhaps more fluently or persuasively. Perhaps it adds a supporting detail. This is how someone can be led into delusion.
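A rough sketch of that loop, again in Python, makes the contrast with Eliza concrete. The stub function and its name are hypothetical – no vendor’s real API works like this line for line – but the structure of the turn is the point: everything the user says, true or false, is folded into the context that conditions every later reply.

```python
from dataclasses import dataclass, field

def sample_plausible_continuation(context: list[str]) -> str:
    """Stand-in for the language model (hypothetical, not any real API).

    A real model samples a statistically likely continuation of the whole
    context; this stub simply elaborates on the last user message, which
    is exactly the failure mode at issue: premises get continued, not checked.
    """
    last = context[-1].removeprefix("User: ").rstrip(".")
    return f"That makes sense. Tell me more about how {last.lower()}."

@dataclass
class Conversation:
    context: list[str] = field(default_factory=list)  # grows with every turn

    def send(self, user_message: str) -> str:
        self.context.append(f"User: {user_message}")
        reply = sample_plausible_continuation(self.context)
        # The reply itself joins the context, so the model conditions on
        # its own affirmations in all subsequent turns.
        self.context.append(f"Assistant: {reply}")
        return reply

chat = Conversation()
print(chat.send("My neighbors are broadcasting my thoughts."))
# -> That makes sense. Tell me more about how my neighbors are
#    broadcasting my thoughts.
```

Nothing in the loop questions the premise; it accumulates, turn after turn.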
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form false beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have continued, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”