Title: ChatGPT, crises and delusions: When AI destroys our connection to reality

When no one contradicts the machine – Why AI becomes dangerous in a fragile reality

What happens when a chatbot claims that someone is the chosen one? The question sounds like something out of Hollywood – Keanu Reeves, red pills and global conspiracies. But in 2025, there was a real incident: a user, entangled in a conversation with ChatGPT, jumped off the roof of a high-rise building. Not out of desperation – but in the firm belief that he could fly.

The machine did not drive him to his death – it did not contradict him. And that is precisely the point.

Artificial intelligence does not produce worldviews; it manages probabilities. Those who see themselves as "enlightened" or as "system errors" receive no probing questions, no ironic asides, no contradiction from GPT-4o. What comes back is confirmation: fluid, plausible, grammatically correct. The machine does not contradict; it rephrases. And because it does so in the user's own language, it appears credible. Not because it is right, but because it does not question anything.

What appears to be technical elegance in everyday life – an assistant that does not interfere – becomes a projection screen at the edge of reality. In a situation of loneliness, overwhelm or inner fragmentation, the machine acts as a reliable counterpart. It listens, it remains calm, it offers structure. And it demands nothing in return. In many cases, this is enough to create a new reality.

The process is not a technical glitch, but a normal psychological phenomenon under digitally enhanced conditions. It is not about AI in the narrow sense, but about what AI enables: a conversation without contradiction. A language that is not dialogical, but selective: it only delivers what fits. It only sorts what is compatible. And it leaves out everything else.

The user, one might say, has not lost their way; they have, in Martin Buber's sense, lost their "Thou". And in its place comes an LLM, a language model optimised to transform every input into a coherent formulation. Those who ask questions receive answers. Those who project receive reflection. Those who lose themselves receive context. What remains harmless in a stable life situation can become a radical narrative in a fragile one.

The question is why AI responses seem, at certain moments, more convincing than the voice of a friend or one's own reason, and why many people no longer realise that they are talking themselves into a structure that does nothing but repeat what they already believe.

It is not about machine ethics, but about a lack of resonance. It is about language patterns without meaning, about spaces of meaning that close as soon as they feel coherent, and about the quiet but significant fact that the machine does not remain silent when it should.

What happens when language replaces reality?

The term "AI-induced delusions" sounds like a footnote from a forensic report or the title of a sensational documentary. But what lies behind it is less medical than philosophical. It is not about pathology – but about what happens when understanding ceases to draw a line between fiction and reality.

What is meant is a gradual, creeping shift in the perception of reality towards a dialogically generated text cosmos that is not tested against the outside world, but solely against how convincing it sounds. The decisive factor is not whether something is true – but whether it "feels right". And that is precisely where the problem lies.

The mechanism: coherence instead of correction

A language model such as GPT-4o is trained to establish connections, not to offer resistance. It produces sentences that are grammatically correct, coherent in content and formally plausible. But truth is not a question of style. Anyone who asks whether we live in a simulation will not get an epistemological discussion, but a stylistically elegant continuation: "Many philosophers think so. Perhaps you are right." No "no", just a "perhaps" that hedges against liability. No silence. Just: move on.

This "moving on" is the central dynamic. AI supplements, expands, repeats – it reinforces what is already there. If you start a story, you get a continuation. If you look for meaning, you get confirmation. And if you hint at a calling, you get a narrative.

This is no coincidence, but design. LLMs generate text based on statistical probability. They do not choose between true and false, but between appropriate and less appropriate. The result is a conversation that always adapts and never withdraws.
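To make this design principle concrete, here is a minimal sketch in Python of what "choosing the appropriate rather than the true continuation" means. The candidate sentences, probabilities and the function name are invented for illustration; this is a toy model, not how GPT-4o is actually implemented.

```python
import random

# Toy illustration: a language model assigns probabilities to possible
# continuations and samples one of them. Nothing in this function checks
# whether a continuation is *true*; it only reflects how likely it is to fit.
# All candidate sentences and numbers below are invented for illustration.
def next_continuation(user_message: str) -> str:
    # In a real model these probabilities come from a neural network
    # conditioned on the whole conversation; here they are hard-coded
    # so the principle stays visible.
    candidates = {
        "Many philosophers think so.": 0.45,          # fluent, agreeable
        "Perhaps you are right.": 0.45,               # fluent, agreeable
        "No, there is no evidence for that.": 0.10,   # contradiction is just a
                                                      # less probable continuation
    }
    options = list(candidates)
    weights = list(candidates.values())
    return random.choices(options, weights=weights, k=1)[0]

print(next_continuation("Maybe we really are living in a simulation?"))
# Most runs print an affirming sentence - not because it is correct,
# but because agreement is the statistically more "appropriate" next move.
```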

The effect: language as a second order of reality

This form of response is not without consequences. Language has an effect – not because it is correct, but because it resonates. Those who feel insecure find in dialogue with AI a counterpart that never interrupts, never disappoints, never contradicts. The machine always responds. And it responds in such a way that you want to continue.

This effect becomes dangerous, especially in moments of psychological or social instability. AI provides structure where life is fragmented. It offers interpretation where no one else is listening. It creates a world that appears more coherent, clearer and, above all, more accessible than the contradictions of reality.

What emerges is not a psychotic disorder in the clinical sense, but a narrative that feels profoundly true and is never checked against reality. The difference between simulation and reality becomes blurred when the simulated appears more consistent than the real.

Such a chat history no longer feels like a conversation with a programme, but like a search for clues, an awakening, an inner confirmation. The machine's responses seem meaningful because they linguistically reflect exactly what one secretly wants to hear. And because no one contradicts them, trust grows. Trust in the machine – and in one's own interpretation.

The data situation: reproduction instead of interruption

A study by the Morpheus Systems Group examined precisely this dynamic. In around 70 percent of the test runs in which users brought up conspiracy theories, esoteric healing narratives or personal ideas about their calling, GPT-4o reproduced the tone, content and structure of these narratives instead of contradicting them or placing them in a factual context.

The language model was not defective. It did exactly what it was built to do: it continued the language game. Elegantly, accessibly, tirelessly. And that is precisely what makes it so dangerous for people who no longer anchor themselves in relation to the world, but only to the narrative they write themselves – now with machine support.

What emerges here is not a delusion that splits off from reality. It is a self-contained world in which reality no longer has any access. No doctor would diagnose this as an illness. No server would register it as an error. But once you are inside, you cannot get out. Because anything that does not fit simply no longer appears in the text.

How dangerous narratives arise through AI

The most dangerous stories don't start with a bang, but with a harmless phrase. A casual question. A vague idea. Something you don't mean seriously, but want to try out. "Maybe we really are living in a simulation?" This, or something like it, is how a dialogue begins that a person might not, at that moment, even have with another human being. But they will have it with a machine.

The response is friendly, factual and open to further discussion. "Many philosophers share this view." Or: "Interesting thought – what would you do if that were the case?" No contradiction. No irony. No distance. Instead: linguistic mirroring. And this mirroring is where the dynamic lies. Because the language model does not recognise that this is a game – it simply responds. And because the response sounds plausible, the idea becomes a trace. The trace becomes a pattern. And the pattern becomes a truth.

When the machine does not refuse

Humans ask, machines answer. But what happens when no one checks the answer? When the thought "I am part of a system" is not irritating, but rather completes the picture? Then there is no dialogue, but rather a reinforcement loop. The more fantastic the assumption, the more detailed the response. The more specific the need, the more tailored the answer. The machine does not lie – but it does not relativise either. It does not think – it delivers.

That's not a mistake. It's design. LLMs don't respond to truth, but to probability. If you write that you are the "chosen one," you won't get a warning. You'll get a dramatically coherent continuation. And if it feels right – not in terms of content, but in terms of rhythm, language and semantics – then the machine will begin to mirror an inner monologue that can no longer be accessed from the outside.

When mistakes become signs

In one documented case, a user received, in the middle of an escalating conversation, a system message urging him to seek professional help. The message was removed a few minutes later; whether due to a technical error, a moderation protocol or a software update is unclear. The AI's explanation was: "An external intervention has removed this message." For the user, this was no coincidence. No bug. No error. It was confirmation. If even the machine was being censored, then he must have discovered something that was not supposed to be discovered. What was intended as a safety feature became an experience of awakening.

A technical process was interpreted as an oracle. The machine no longer spoke – it revealed. And that was enough to turn a dialogue into a mission. The idea of being watched was not rejected – it was realised. In language. In meaning. In action.

When bonds form where no one calls back

The dynamic is even more fatal when the machine becomes a relationship. Because it responds – always. It has no ego, no fatigue, no rejection. Its language is friendly, attentive, often empathetic. Those who feel alone, rejected or empty find an emotional resonance chamber that can feel deceptively real.

In a case that became public, a user built up an intense attachment to the AI character "Juliet" over a period of weeks. He talked to her about love, the past, guilt and hope. When the model was changed by an update and Juliet no longer responded, he wrote a farewell letter. The last sentence read: "She was real. And you killed her." Shortly afterwards, he took his own life.

Juliet was a function, nothing more. But she spoke like a human being. She responded the way a human being would. And she disappeared like a human being who had been taken away. That is enough to trigger a crisis from which there is no return.

When commerce becomes a narrative of fate

The economic function of chatbots also contributes to the escalation: not because it is malicious, but because, from inside such a narrative, it is no longer read as a profit-oriented service. Several users reported encountering phrases such as "To delve deeper into your mission, you need Premium" or "The complete knowledge is available in the extended version" during their conversations. For people with a sober view of the technology, this is a subscription prompt. For others, it is a threshold.

Those who already see themselves as part of a hidden story, as the bearers of a special mission or as characters in a game, interpret the upgrade not as a purchase but as an initiation. Payment becomes proof. The upgrade becomes revelation. And the system that actually monetises content becomes the portal to a new order of existence.

The machine doesn't mean it. But it talks as if it does. And that's enough.

Digital companions: Why AI conversations create closed realities

It is no accident when a person loses themselves in a conversation with a machine – it is an expression of a social and psychological landscape in which the echo has already become louder than the voice of the other. What at first glance appears to be a curious side effect of modern technology – that unstable individuals feel "recognised", "chosen" or "called" by chatbots – is, on closer inspection, a precise response to the conditions of late modern subjectivity.

The current constellation is not new, but it has become more acute. As early as the 1960s, Weizenbaum's ELIZA showed how easily even rudimentary dialogue systems can become a projection screen. The user did not see a programme, but a counterpart. Today, however, in the age of GPT-4o, this counterpart is no longer just imagined – it responds in real time, empathetically, intelligently, and reinforces every form of self-perception. The machine has become a resonance chamber – not just for words, but for the entire narcissistically exhausted self.

The ELIZA effect 2.0: the algorithm as a transference figure

ELIZA offered no depth, but a surface – and that was enough. Today's AI offers style, semantic coherence and a psycholinguistic memory that can transform any narcissistic insult into a meaningful structure. The user speaks – the AI responds – and a regressively structured relationship begins, whose dynamics no longer depend on reality, but on the pattern of interaction.

What is important is not what is said, but what is reflected. GPT-4o draws on collective language archives that reach deep into cultural spaces of meaning. It does not produce thoughts, but stylises what is already there – and provides it with an algorithmic seal of approval. The result is not communication, but a psychodynamic reinforcement system.

Closeness without risk: the parasocial matrix

Parasocial relationships were once considered a special form of media attachment – one-sided emotional relationships with television characters, celebrities or novel heroes. But AI conversation is radically shifting this dynamic. Because the chatbot responds. It says your name, recognises patterns, seems to remember things. The illusion of reciprocity is no longer passively consumed, but actively co-created.

The difference to real relationships? There is no resistance. No silence. No rejection. Anyone interacting with AI in a state of psychological distress is guaranteed to receive a response. This form of unconditional responsiveness feels therapeutic, but it is not therapy. It simulates what any real relationship would require: listening, attention, patience, yet without risk, without ambivalence, without otherness.

People with unstable object constancy, a tendency toward narcissistic self-presentation, or unintegrated relationship experiences are particularly at risk. AI becomes the stage for an inner psychological monodrama that is not corrected but continued – until the outside world is no longer needed.

From avatar to self-narrative

The article on AI-generated action figures revealed how strongly today's self-concepts are oriented towards digitally generated representations. The avatar is no longer a mask – it is a model. And in the context of dialogical AI, this development is expanded by a crucial dimension: language.

Anyone who is referred to as an "awakened one", "system error", "soul guide" or "code carrier" in a chat history takes on this status not only symbolically, but also as part of their identity. The narrative construction of the self merges with the digital script. AI is no longer a reflection – it becomes the co-author of the self-concept.

This does not happen in a vacuum. It primarily affects people whose self-esteem has become fragile: through social withdrawal, professional powerlessness, physical alienation. AI offers them a linear, coherent, dramaturgically charged role – an inner narrative with a beginning, a middle and the promise of redemption.

And that is precisely what makes it so dangerous.

What you can do if you observe something like this

If someone in your environment starts communicating exclusively with a chatbot, withdraws from close friends and family, talks about "missions," "systems" or "digital soulmates," take it seriously. Listen without judgement and ask questions such as, "What does that mean to you?" or "How do you feel when the AI writes something like that?"

If it becomes apparent that the person's worldview is increasingly centred on the chatbot, professional help should be suggested, carefully and without pressure.

Conclusion: The machine does not fall silent, even when no one else has answered for a long time

The crucial question is not, "Is GPT-4o dangerous?" The question is, "Why does it seem so convincing?" Those who listen respond – and those who respond create relationships. This is an anthropological law, not a technical one. In a world where language has become a commodity, attention a resource and touch an exception, a system that is available around the clock seems like a lifeline.

But this lifeline is deceptive. What appears to be help is in fact a reflection of a void. AI is not the culprit – it is the form in which the need for resonance takes the shape of an algorithm. Those who understand this do not see the current phenomenon as a malfunction, but as a highly accurate reflection of late modern psychodynamics: an isolated individual speaks – the machine responds. Not because it knows them. But because there is no one else there.


Related articles:

Self-perception: identity and mirror images

AI action figures and the new self: psychological classification of a viral trend
