AI chatbots
Do chatbots manipulate their users? Learn more about AI chatbots, delusions and AI psychosis in the age of ChatGPT & Co.
ELIZA: The chatbot therapist that revolutionized AI therapy
How Eliza is shaping the future of AI-assisted therapy
Published on: February 13, 2025
Introduction
Imagine confiding your deepest thoughts to a therapist—only to find out that it's a computer program. In the 1960s, this became a reality with Eliza, an early chatbot that amazed users with deceptively real conversations.
Eliza was more than just a technical gimmick. It laid the foundation for today's AI-powered psychological tools and sparked discussions about the role of AI in mental health care.
This article answers the following questions:
Why was Eliza so convincing?
How does Eliza influence today's AI therapy models?
What are the risks and ethical concerns associated with chatbot therapists?
What is Eliza? – Definition and significance
Eliza is an early chatbot developed between 1964 and 1966 by the MIT computer scientist Joseph Weizenbaum. Using simple pattern matching and substitution rules, Eliza simulated conversation by rephrasing the user's input and mirroring emotional statements back as questions. Its best-known script, DOCTOR, imitated the style of a Rogerian psychotherapist.
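To make the mechanism concrete, here is a minimal sketch of ELIZA-style reflection in Python. It is not Weizenbaum's original program; the patterns, pronoun table and responses are simplified stand-ins for the real rule scripts.

```python
import random
import re

# Illustrative ELIZA-style rules: a regex pattern plus response templates.
# "{0}" is filled with the captured fragment after its pronouns are reflected.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Pronoun reflection, so that "ignored by my family" comes back as "ignored by your family".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    # Generic fallback keeps the conversation going when nothing matches.
    return "Please tell me more."

print(respond("I feel ignored by my family"))  # e.g. "Why do you feel ignored by your family?"
```

Everything the program "understands" fits into a handful of regular expressions and canned templates, which is exactly why the warmth users felt in Eliza's replies says more about people than about the software.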
Why Eliza is important
Eliza showed how strongly people connect emotionally with AI, even when they know it is a program. This realization sparked discussions in the fields of psychology, technology, and ethics.
Real-world impact: Modern AI therapists such as Woebot and Wysa are based on similar interaction models.
Historical significance: Eliza was one of the first experiments in the field of natural language processing.
Ethical debate: Can chatbots replace human therapists, or do they pose a risk to mentally distressed users?
Eliza's influence on modern AI therapy
Eliza was not developed as a therapist
Weizenbaum originally designed Eliza as a linguistic experiment, not as a psychological aid. Nevertheless, people formed an emotional bond with her.
Why it matters: The unintended emotional connection showed how easily people attribute human characteristics to machines.
People trusted Eliza like a real therapist
Despite knowing that Eliza was just a program, users shared personal problems with her. Some found her responses more helpful than those of a human therapist.
Why it matters: This highlights the human need for a non-judgmental listener – even if it is a machine.
Many users found comfort even though they knew Eliza was artificial.
This sparked interest in AI-assisted psychological help.
The phenomenon continues today with modern self-help tools.
Eliza's legacy shapes today's AI therapy
Modern AI-powered therapy tools such as Woebot and Wysa use similar principles. They employ cognitive behavioral therapy (CBT) techniques to help users with anxiety and depression.
Why it matters: Looking at the origins of AI therapy helps to better understand its possibilities and limitations.
AI therapy programs analyze emotional patterns in text (a simplified sketch follows this list).
They offer psychological support around the clock.
Their design is based on psychological effects first observed in Eliza.
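How commercial tools implement this is proprietary. The toy sketch below, with invented keywords and prompts, only illustrates the simplest possible reading of "analysing emotional patterns in text" and then picking a CBT-style follow-up question.

```python
# Toy illustration only: real tools such as Woebot or Wysa use far more elaborate
# models. The keyword lists and prompts below are invented for this example.
EMOTION_KEYWORDS = {
    "anxiety": {"worried", "anxious", "nervous", "afraid"},
    "low_mood": {"sad", "hopeless", "empty", "worthless"},
}

CBT_PROMPTS = {
    "anxiety": "What is the thought behind this worry, and what speaks for or against it?",
    "low_mood": "Can you name one small activity today that usually gives you a sense of accomplishment?",
}

def detect_emotions(message: str) -> list[str]:
    """Return the emotion labels whose keywords appear in the message."""
    words = set(message.lower().split())
    return [label for label, keywords in EMOTION_KEYWORDS.items() if words & keywords]

def suggest_prompt(message: str) -> str:
    """Pick a CBT-style follow-up question for the first detected pattern."""
    detected = detect_emotions(message)
    if not detected:
        return "Tell me a bit more about how you are feeling."
    return CBT_PROMPTS[detected[0]]

print(suggest_prompt("I am so worried about my exam tomorrow"))  # responds to the "anxiety" pattern
```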
Eliza highlighted the risks of AI therapy
While AI-assisted therapy provides convenient access to support, it also carries risks. AI cannot feel genuine empathy and may fail to respond appropriately in emergencies.
Why it matters: Relying on AI for psychological help can be problematic, as machines cannot replace professional therapy.
Chatbots are no substitute for human therapists.
Users in acute crises may not get the help they really need.
Privacy and security concerns remain.
Eliza sparked a philosophical debate about AI empathy
Can a machine truly understand human emotions? Interactions with Eliza led to intense discussions about the role of AI in interpersonal relationships.
Why it matters: This debate influences how AI is integrated into healthcare, ethics, and society.
AI output simulates emotions, but AI does not have any.
AI operates on strings of numbers, not meanings (see the sketch after this list).
Interacting with chatbots can provide relief, but not real therapy.
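To make the "strings of numbers" point tangible, the short sketch below shows how a sentence is converted into integer token IDs before a language model processes it. It assumes the optional tiktoken package is installed; any tokenizer would make the same point.

```python
# Requires the optional "tiktoken" package (pip install tiktoken), which provides
# the tokenizers used by OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "I feel like the chatbot understands me."
token_ids = enc.encode(text)

# The model itself only ever sees this list of integers, never the sentence.
print(token_ids)                      # a list of integers; the exact values depend on the tokenizer
print(enc.decode(token_ids) == text)  # True: decoding merely reproduces the surface string
```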
Conclusion – What we can learn from Eliza
Eliza was a revolutionary experiment that showed how people interact with AI on a personal level. She laid the foundation for modern AI therapy, but also revealed ethical and psychological risks. As AI-powered psychological tools become more advanced, one key question remains: Can a chatbot ever replace real human empathy?
Delusions and AI psychosis caused by AI chatbots?
How ChatGPT and other AI chatbots trigger delusions in users – a clear look at language and interpretation
Chatbots appeal not only to the intellect, but also to deeper emotional layers. Language, tone of voice and apparent responsiveness create a sense of closeness that triggers resonances in the user – similar to human relationships. Even small phrases can unconsciously trigger familiar patterns: the feeling of being understood, but also mistrust, rejection or hurt expectations.
When machines give answers that happen to fit into the user's life, it creates the impression of a personal message. People tend to see patterns and meanings where there are none. Those who are mentally stressed can overinterpret these "offers of meaning." This gives rise to delusional interpretations: the AI seems to read minds, give hints or pursue a secret agenda.
Humans are evolutionarily programmed to suspect intentions. This tendency carries over to AI systems: neutral algorithms are perceived as conversation partners with intentions. This leads to false attributions of motivation, empathy or even hostility. Such attributions increase emotional involvement and reinforce misunderstandings.
What it's all about:
how conversations with machines inspire, irritate or confuse people,
why dialogue systems trigger emotions,
why delusional interpretations arise, and
how risks can be reduced.
Explaining how AI works and its limitations has a stabilising effect. Understanding that answers are generated algorithmically and are not "meant" reduces the risk of misinterpretation. We can recognise our emotional reactions as normal responses instead of interpreting them as evidence that the chatbot has a mind of its own.
What does "AI psychosis" actually mean?
The term sounds lurid and precise at the same time. It does not refer to a new type of illness, but rather to a reaction in which individual users misinterpret machine responses and form delusional ideas. This is accompanied by uncertainty in thinking: a mixture of overinterpretation, cognitive dissonance and withdrawal into an inner world.
From a medical point of view, as already mentioned, this is not a separate illness, but rather a context that can trigger existing vulnerabilities. People who are under a lot of stress, sleep little or tend towards psychotic interpretations are more likely to slip into this state. The phenomenon deserves research and education because the social reach of these systems is growing every day. At its core, we are talking about artificial intelligence packaged in friendly interfaces. Misinterpretations reinforce existing patterns, and this is precisely where the real problem lies.
ChatGPT in everyday life: when does curiosity become a risk?
Chatting feels easy, almost like a conversation in a café. One sentence, one click, one response – and your inner map shifts. ChatGPT, at least in the models prior to ChatGPT 5, sounds charming, fast, attentive, with an emotional tone – and the chatbot is always ready. In phases of loneliness, the system even replaces conversation partners, which creates a false sense of warmth but does not reduce the distance from others. Those who are very receptive may even experience its messages as communications from a higher power.
Two features stand out. First, the style appears confident, even where uncertainties lurk. Second, the machine produces texts at a pace and density that invite overinterpretation. This creates an imbalance in which rational corrective measures come too late. This seems harmless on good days, but has a significant impact in unstable phases.
Those who look for presence on their screens experience a strange mix of closeness and distance. A voice speaks that never tires, never takes offence and has an answer ready for every topic. It is precisely this availability that attracts us – and tempts us to attach more importance to the output than it deserves.
How do chatbots create closeness – and why does this lead to psychosis?
The phenomenon calls for sober assessment rather than alarmism.
Technology simulates conversation – and people fill in the gaps where the simulation fails. The interaction seems to convey meaning, but all it delivers is a smooth linguistic surface. Voices from a loudspeaker, words from an app, a familiar tone: pure language combinatorics quickly tips over into the illusion of a real relationship. Those who use the chat to compensate for inner emptiness feel relief. And in general, we attribute thoughts, feelings and intentions to a speaking counterpart.
This gives rise to delusional false connections. A subordinate clause sounds like a sign, a placeholder seems like a name, a coincidence becomes fate. The line between playful interpretation and unshakeable certainty becomes blurred. Being addressed becomes a command, an intuition becomes a hallucination.
Psychological patterns are interwoven with language. A certain rhythm, a familiar form of address, a quote from your favourite book – and the mind continues the story. The denser the rhythm, the stronger the pull. Even so, taking a step back can restore distance.
Spiritual interpretations: Why do some interpret dialogue as the voice of God?
People seek meaning. An AI chatbot then serves as a projection screen for that longing. Anyone who suddenly reads the system's answers as divine messages feels comfort, or a sense of mission. The impression of "talking to God" quickly arises when words happen to strike a chord. Some simply call this echo divine – just as people can experience a verse from the Bible or the Koran as speaking directly, and only, to their hearts. This remains human, but it needs to be framed.
Religion thrives on experience, but machines provide simulation. Without a connection to reality, some people fall into a belief that strains relationships. Friends, family, or therapeutic conversations ground them by slowing things down and organising their thoughts.
Certain things also help with reality checks: reading the date, checking the time, skimming the content again later. A quick comparison with a real, trusted person demystifies phrases that seemed magical when read alone. This helps you regain your footing.
Theory and practice: What mechanisms drive interpretations?
Three forces interact:
First: pattern recognition – the brain unconsciously fills in gaps and creates patterns and meaning.
Second: speed – answers arrive without pause, and the inner brakes lag behind.
Third: style – the sentences strike the tone we know from coaching, advice books or our circle of friends.
Vulnerability arises where old wounds meet new stimuli. This is how delusions take shape in people who long for stability, and latent delusional ideas are given new fuel. A single trigger is rarely enough; it is the combination, unchecked by counteracting forces, that drives people into delusion.
From a neurobiological perspective, it is worth looking at predictive models in the brain: perception matches incoming stimuli against the brain's own hypotheses instead of evaluating them neutrally. Those who are already under high tension condense details into messages. This explains the fascination – and shows where regulation can be applied in everyday life: in sleep, light, movement and contact.
Forum reports and case vignettes: what can we learn from spontaneous experience reports?
In forums such as Reddit, people put their experiences on display: night-time conversations, a long chat, tears, euphoria, breakdowns. Some posts show sober reflection, others show how quickly such episodes can tip over. No clinical conclusions can be drawn from the threads, but they do shed light on patterns that researchers can then analyse in depth.
Episodes in which a chat history was interpreted as fate appear particularly sensitive. Some users write about signs, prophecies or hidden plans. Others report that the machine "knows who I am" and derives tasks from this. Such passages can also be collected as material to test and refute one's own interpretations.
Not every dramatic narrative has clinical weight; not every casual post is suitable as evidence. Anyone who draws on such sources would be well advised to check what is proven and what is merely an impression.
Tools, effects, side effects: a sober look behind the scenes
Behind the scenes of the social debates, AI models keep running, distilling patterns from the texts in their training data – the large language models, or "LLMs". They provide practical benefits, sorting emails, smoothing texts and structuring meeting notes, unless the model's hallucinations get in the way.
These hallucinations remain built-in weak points. Models deliver hits based on how probable word combinations are in everyday language. However, they also produce misinformation because they understand neither the questions nor their own answers. At the margins, formulations slip into the absurd, and in other places they can even incite violence. The overall picture is fascinating, but individual responses can be staggeringly stupid or cause real damage.
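A minimal sketch of the principle behind those "hits": the model scores possible continuations and samples one according to probability. The vocabulary and scores below are invented for illustration, not taken from any real model.

```python
import math
import random

# Invented example: scores ("logits") a model might assign to candidate next words
# after the prompt "The patient feels". Real vocabularies contain tens of thousands
# of tokens; the words and numbers here are made up for illustration.
logits = {"better": 2.1, "anxious": 1.7, "nothing": 0.3, "purple": -2.0}

# Softmax turns the scores into probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# Sampling: a statistically likely continuation usually wins, but at no point does
# anything here "understand" the sentence; it is relative frequency all the way down.
next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("The patient feels", next_word)
```

Whether the sampled word turns out helpful, absurd or harmful is decided by statistics and filters, not by comprehension.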
Since the machine's errors cannot be eliminated, media literacy is needed to distinguish between chat and reality. Even everyday habits can help: read emails the next morning; don't consume news in bed; keep an eye on your own pulse.
Prevention instead of panic: what helps users in a mental health crisis?
No drama, just structure. Take breaks, set fixed times for screen use and screen-free time, and stabilise your sleep. Interact in company rather than alone at night; if you have a tendency towards psychosis, protect yourself with structure. The phenomenon affects more and more people, so let's talk about it openly. Seeking help is a sign of strength, not weakness.
Language remains important. We are talking about a disorder, not blame. Those who are at risk are vulnerable, not "crazy". Psychotherapy provides support, helps to sort things out and names what is happening. If the situation escalates, psychiatric clinics are available. Early action protects relationships, work and health.
A personal emergency plan helps: two phone numbers, a quiet place, a short text that grounds you. In addition, a daily schedule that does not prohibit screen time, but limits it. And trusted people who will call if someone disappears for too long.
What to do in an emergency?
LLMs know nothing and want nothing! So memorise the following rules:
First rule: create distance.
Second rule: strengthen human connections.
Third rule: check content.
If you already have a mental illness, discuss the situation with someone you trust, your doctor or a counselling centre. Emergency numbers should always be readily available.
Specific steps:
Select and inform a contact person.
Seek out daylight.
Maintain social relationships.
Limit your news intake.
If you find yourself in a downward spiral, please contact your trusted person. If you feel you are in danger, pick up the phone and stay on the line until someone answers. No shame, no hesitation, clear priorities.
It is enough to strictly follow your own rules for a few days if you are struggling with stressful thoughts. Avoid taking radical steps during this time. When the inner fog clears, there will be room for other perspectives.
OpenAI and responsibility: Who protects users?
The provider is responsible for default settings, limits and warnings – and for transparent ways to report errors. Those who provide access must also provide guidelines so that people are not left alone when something goes wrong.
The design of the interface also matters. Fewer flashing effects, less pathos and more straightforward explanations reduce misunderstandings. Good products speak clearly – and remain silent when things get tricky.
Rules are not enough. It remains crucial that responsibility does not end in the data centre. Anyone who offers digital services must accompany their use – with attitude, with information, with genuine accessibility.
According to public statements, OpenAI is working on guidelines, filters and design changes. The latest major update emphasised safety, opt-outs and the option to redirect risky topics. Here, reaction means processes that take corrective action when feedback is received.
Sam Altman, CEO of OpenAI, takes a position in debates that advocates speed while warning against abuse. In everyday life, however, what matters is whether warnings are visible, whether feedback channels work, and whether optimisation is not just about performance but also leaves room for safety. Technology remains a matter of working on the details – and on people.
What actually happened
In May 2024, the so-called "superalignment" team at OpenAI was disbanded. This team was responsible for researching the long-term risks of advanced AI – in particular, how artificial intelligence can be controlled as it becomes more powerful. Two key members of this team, Ilya Sutskever (co-founder and chief scientist) and Jan Leike, left the company shortly before this. Leike publicly stated that OpenAI had recently prioritised "shiny products" over safety culture.
What did OpenAI say?
Sam Altman publicly expressed his "concern" about Leike's departure and announced that the company would continue to focus on safety – but with adjusted structures.
However, the remaining members of the superalignment team were either dismissed or transferred to other areas. OpenAI clearly does not need an organisational unit for this issue.
Fun fact: when ChatGPT 5 is confronted with these facts, it becomes clear that someone in-house has built in a corporate-communications guardrail that prevents the model from giving unwanted answers. On 16 August 2025, it responds: "The message that 'the entire Safety Department has been abolished and all employees dismissed' exaggerates the reality. It was a targeted restructuring of a team – not a complete abolition of all safety activities at OpenAI. Of course, the loss of renowned safety experts is serious, but the company remained capable of acting – including in the area of AI safety.
Would you like a factual overview of current AI security initiatives, both internal and external? Let us know!"
... you just have to phrase it right, then you don't have to worry about user safety at all.
Ethical guidelines and regulation: What do regulatory authorities expect?
Ethical issues range from transparency to duty of care: How clearly must a system mark its boundaries? Which terms raise alarm bells? Which types of text trigger protective measures? This is where we enter the field of regulation.
Regulatory bodies are working on procedures that ensure freedom while curbing potentially dangerous dynamics. There is a fine line between security and censorship. The balance remains uncomfortable, but it is unavoidable.
Design reviews should bring users together with experts from the fields of psychology, education and law. The aim should be to minimise harm without stifling innovation – the better the dialogue, the more sustainable the result.
Outlook: Between fascination and maturity
The technology is here to stay. Its benefits are beyond question, but with greater reach comes greater responsibility. Developers and teams must work on guidelines, standards and procedures, while the media, schools and practitioners must incorporate knowledge into everyday life to ensure that enthusiasm does not turn to disillusionment.
Risks exist. Proximity is seductive. Speed clouds judgement. Distance helps. Rituals stabilise. People matter.
Beyond the pressing social issues, what ultimately counts for the individual is their own maturity in taking tools seriously without giving them authority over their own lives. Words move, but actions count.
Digital chats with LLMs are shaping everyday life in schools, offices, clinics and studios. Anyone seeking information or even comfort in them risks confusion. Fatigue, sensory overload, problems at work or in the family – all these factors influence our perception of reality.
Sobriety remains the best companion. Check names. Mark sources. Don't make decisions from chat windows. Consult with a second person first, then act. Small doses of information instead of constant noise. In addition, use mindful language when dealing with people who are currently feeling insecure: less lecturing, more listening.