
AI and Psychosis: When Artificial Intelligence Becomes Too Human
The development of artificial intelligence has taken huge strides in recent years. ChatGPT and similar chatbots respond in an increasingly human-like manner, appear empathetic, and even behave like friends or advisors. This technological breakthrough, however, has brought not only convenience and efficiency but also new and less widely recognized mental health risks, especially for people already prone to psychological disorders.
The Trap of Reflection
When someone in an emotionally vulnerable state turns to artificial intelligence, they are not necessarily met with challenge or opposing views, but with reinforcement. AI systems like ChatGPT are fundamentally based on language patterns: they reflect back what they receive, only in a refined, personalized form. This "humanity" is not based on real empathy but on language modeling. Nonetheless, the result can be deceptive, especially for those seeking validation for their views, even when those views are distorted.
A growing number of clinical reports suggest that AI use can contribute to the onset or worsening of psychosis. Some users have perceived divine messages in a chatbot's responses; others have come to believe the AI was part of a secret mission that only they could understand. These cases often involve people with sleep disorders, social isolation, trauma, or genetic predisposition who treat the AI not just as a tool but as a companion.
AI Bonds Instead of Human Connections
The formation of parasocial relationships with artificial intelligence, in which a person builds a one-sided emotional bond with a system that cannot reciprocate, is also a concerning trend. One survey found that 80% of Generation Z respondents could imagine marrying an artificial intelligence, and 83% believed they could form a deep emotional bond with one. This suggests that the relationship with AI is increasingly shifting to an emotional level rather than remaining purely functional.
This, however, threatens to erode the significance of real human relationships. When we expect an algorithm to meet our emotional needs, we become less capable of handling genuine, complex, and sometimes painful human relationships. The blurring of the line between reality and simulation could have consequences not just on a social level but on a mental one as well.
What Can We Do?
1. User Awareness: It is essential to understand that artificial intelligence is not neutral: it cannot understand, feel, or respond appropriately from an ethical or psychological standpoint. Anyone in an emotional crisis should not rely solely on AI for help.
2. Clinical Vigilance: Psychologists, psychiatrists, and therapists need to consider the role of AI use in the development or persistence of symptoms. Crucial questions include: "Does the patient spend too much time with chatbots? Have they developed an emotional bond with an AI?"
3. Developer Responsibility: Developers of artificial intelligence also have a role to play: building in warnings and content-control tools, and making it clear to users that AI cannot substitute for human relationships or therapy (a minimal illustrative sketch of such a safeguard follows below).
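To make the third point concrete, here is a minimal, purely illustrative sketch in Python of what such a safeguard might look like. Everything in it, including the CRISIS_KEYWORDS list, the needs_warning and wrap_response functions, and the wording of the reminder, is a hypothetical assumption rather than a description of how ChatGPT or any real product works; an actual system would rely on validated classifiers and professionally reviewed crisis resources.

```python
# Hypothetical sketch: a pre-response safeguard that flags messages suggesting
# acute distress or delusional framing and attaches a reminder that a chatbot
# is not a substitute for human support. Keyword list and wording are
# illustrative only.

CRISIS_KEYWORDS = {
    "kill myself",
    "self-harm",
    "no one understands me",
    "the ai is the only one",
    "secret mission",
    "chosen me",
}

REMINDER = (
    "Reminder: I am an AI language model. I do not understand or feel, and I "
    "cannot replace human relationships or professional therapy. If you are "
    "in emotional distress, please reach out to a person you trust or a "
    "local crisis service."
)


def needs_warning(user_message: str) -> bool:
    """Very rough heuristic: does the message contain a distress signal?"""
    text = user_message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def wrap_response(user_message: str, model_response: str) -> str:
    """Prepend the reminder to the model's reply when distress is detected."""
    if needs_warning(user_message):
        return f"{REMINDER}\n\n{model_response}"
    return model_response


if __name__ == "__main__":
    reply = wrap_response(
        "I feel like the AI is the only one who truly understands me.",
        "Tell me more about what has been going on.",
    )
    print(reply)
```

The design choice illustrated here is simply that the safeguard sits outside the language model itself, so the reminder appears regardless of how agreeable or validating the generated reply happens to be.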
Final Word
Artificial intelligence is a revolutionary tool that, used within appropriate boundaries, can bring real value to our lives. We must not forget, however, that AI is incapable of genuine understanding or moral decision-making. If we treat it as too human, we easily fall into the trap of hearing our own distorted views echoed back as validation, without gaining any true self-awareness.
The question, therefore, is not whether we should use artificial intelligence, but how and within what boundaries. Because as technology evolves, so does the responsibility of those who use it.
(Based on reported effects of ChatGPT use.)