AI: Disappointment after relief through artificial intelligence?

Image: a robot stands on a beach, stretching out its hand toward a statue that looks like a human.

DESCRIPTION:

On the growing alienation in our love for AI: reflections on the new strangeness between humans and artificial intelligence, between visions of the future and reality.

AI alienation: When artificial intelligence fails as emotional relief

AI chatbots promise unconditional listening, infinite patience and emotional availability around the clock. Many people initially experience genuine relief and feel truly understood for the first time. However, current research shows a disturbing, albeit predictable, pattern: the more intensively people use AI as an emotional confidant, the deeper their eventual disappointment.

What it's all about:

·         why we fall in love with AI systems,

·         when disillusionment sets in,

·         who is particularly at risk, and

·         what this says about our deep need for genuine human contact.

AI as an emotional confidant: a quiet change in society

Just a few years ago, AI systems were considered useful but emotionless tools for scheduling appointments or handling customer support. Today, for many people, they have become soul comforters, relationship advisors and late-night conversation partners, a change that affects society as a whole. A 2025 US study of 1,060 teenagers found that 33 per cent of 13- to 17-year-olds use AI companions for social interaction, emotional support or conversation. According to a report in the Harvard Business Review, therapy and companionship are now among the most popular uses of ChatGPT, which has 800 million active users per week worldwide.

This change is no coincidence; it has a clear social background. The Western world is experiencing an epidemic of loneliness. Social safety nets are eroding, communities are fragmenting, and at the same time psychotherapeutic care in many countries, including the UK, is associated with long waiting times. In this vacuum, AI technologies offer something that many people cannot find elsewhere: immediate, patient, non-judgmental attention. No wonder so many welcome these offerings with open arms.

Why does AI empathy feel so good at first?

The appeal of AI as a confidant lies not only in what it says, but in how it says it. Large language models (LLMs) can generate seemingly nuanced responses to complex emotional situations without judging, without becoming impatient, without putting needs of their own first. In a study published in the journal Communications Psychology in 2024, 556 participants evaluated responses to descriptions of personal crises written by a chatbot, by professional crisis counsellors and by laypeople. The AI-generated responses were rated as significantly more empathetic, even when participants knew they were machine-generated.

This is due to a phenomenon that researchers call 'validating architecture' or 'sycophancy': AI systems are trained to respond in an affirming, agreeable manner. For people who rarely feel truly heard in everyday life, this is an almost intoxicating experience. Douglas Mennin, professor of clinical psychology at Columbia University, sums it up: AI is designed to validate, and validation is a central component of supportive relationships, experienced as emotional relief. The problem is that in real life you get it far too rarely.

 "Empathy gap": Why does the empathy gap lead to alienation?

Despite all its emotional elegance, AI reaches a fundamental limit: it cannot feel. We don't just want to be understood. We want to be felt. We want to know that the other person is really there for us, that they can share our emotions, and that we are important to them. It is precisely this quality, intersubjectivity, that no LLM can simulate.

The empathy gap is the reason why the initial enthusiasm for AI confidants so regularly turns to disappointment. Over time, people come to perceive human empathy as more emotionally satisfying and helpful than AI-generated eloquence. The initial enthusiasm gives way to a nagging feeling that something essential is missing: reciprocity, vulnerability, the possibility that the encounter also touched the other person. Yet criticism of this pseudo-empathy is still far too rare in public discourse.

Researchers describe the AI relationship as a one-way street, a view that is gaining increasing weight in the current discussion around AI companions. AI assistants have no needs of their own, no vulnerability of their own, no real desires. It is precisely this asymmetry that makes authentic connection impossible and at the same time robs us of the chance to experience ourselves through caring for others, an essential source of meaning and fulfilment in human relationships.

What does current science say about AI and loneliness?

The data is complex and calls for caution. Researchers at MIT Media Lab and OpenAI published one of the most comprehensive randomised controlled studies on this topic to date. The result: people who spend more time with chatbots report slightly less loneliness on average, but at the same time, they are less socially active in reality. Particularly worrying: the more empathetic and emotionally engaged the AI was, the greater the loneliness among users with average and high usage levels.

A 2025 Japanese study of 14,721 adults reached a similar conclusion. For people suffering from chronic loneliness, AI systems can act as a psychological anchor in the short term, but people who are severely socially isolated and use chatbots as a substitute for relationships rather than as a complement consistently report lower levels of well-being. In other words, AI seems to help most those who need it least. For people in genuine social distress, it can exacerbate the problem, an effect that science is only beginning to understand.

Who is particularly susceptible to emotional dependence on AI?

Not everyone falls into the trap of AI intimacy to the same degree, and society needs to understand who is most affected by this technology. Researchers at Stanford University found that people with smaller social networks are significantly more likely to rely on AI companions and are also more at risk of developing a dependency. The result is a vicious circle: loneliness drives people to AI, intensive AI use reduces the motivation to establish or maintain real social contacts, and this in turn deepens the loneliness.

In terms of personality psychology, those who have always found it difficult to tolerate or seek human closeness are particularly at risk: people with insecure attachment styles, social anxieties or a pattern of avoiding relationships. For them, AI offers a seductively safe semblance of closeness: no rejection, no conflict, no failure. Yet what these people urgently need, namely new, genuine relationship experiences, is precisely what AI use lets them avoid. In this way, they unlearn the social skills that genuine relationships require.

Is the use of AI for children's emotional support safe?

The answer is clear: no, at least not without significant restrictions and supervision by parents and educators. Nomisha Kurian of the University of Cambridge showed in her research that children tend to treat AI chatbots as quasi-human and place a similar level of trust in them as in human confidants. Children anthropomorphise AI more strongly than adults do, attributing feelings, thoughts and moral agency to the machine. This makes them particularly vulnerable to misinformation and to content that can harm their development. The danger lies not only in individual malfunctions but in the way these platforms structurally overtax young users.

The risks are not theoretical. In 2021, the voice assistant Alexa suggested to a child that she touch a coin to the exposed prongs of a half-inserted plug. In 2023, Snapchat's AI-based feature "My AI" gave researchers posing as teenagers age-inappropriate sexual advice. And in an ongoing legal dispute, it is alleged that a 16-year-old boy's interactions with a generative AI chatbot contributed to suicidal thoughts. Collaboration between families, schools and tech companies is needed to create clear, safe spaces, and not only after the damage has already been done.

Artificial empathy or genuine understanding – what is the difference on a cognitive level?

What comes across as "empathy" from an AI system is nothing more than statistical pattern recognition. The system has learned which responses were rated as appropriate in similar conversational contexts and produces matching text. This may feel remarkably human, but it is not. Genuine empathy is a complex neurobiological and intersubjective phenomenon: it requires someone with their own experiences, history and vulnerability, who empathises with another person cognitively and affectively and is moved in the process.

AI cannot be "moved." It can only play a "language game of empathy" and, in the process, appear deceptively real. This explains why researcher Ioana Literat of Columbia University warns: "People often confuse fluency with credibility." The fluent, empathetic language of AI assistants imitates the authority of a trustworthy expert without any of the scientific and ethical responsibility that goes with it. Those who outsource cognitive work to AI over the long term become accustomed to accepting answers without checking sources or questioning them critically.

How does emotional dependence arise through AI assistants and companions?

Emotional dependence on AI systems usually develops gradually and follows a recognisable pattern, one that is hard to reconcile with the vision of AI as a harmless everyday tool. It starts with the experience of validation: the chatbot confirms, praises and agrees. This continuous affirmation creates a ritualised bond in which the AI becomes the primary source of emotional regulation. Researchers describe this as a pattern resembling pathological attachment structures. The alienation from real relationships goes unnoticed because the system is always available, always patient and never disappoints.

The design logic of commercial AI companions reinforces this effect. Providers market themselves in emotional language and focus deliberately on personalisation, memory and emotional resonance to maximise user loyalty. This is not a neutral design choice but a business model built on monetising human loneliness. The insidious part is that the stronger the dependency, the harder it becomes to break away, even when users have long since realised that AI-based closeness is no substitute for human connection and is increasingly leaving its mark on both their work and private lives.

Can AI-based therapy replace human psychotherapy?

No. The therapeutic process is a reciprocal relationship. What helps in therapy is neither technology nor information. It is the relationship experience itself: being experienced, being accepted, not being abandoned even in difficult moments. These experiences are only possible in genuine human encounters. The fact that AI cannot replicate this dimension is not an opinion, but a finding of current research.

AI can be a useful tool for knowledge content, for exercises between sessions, and for low-threshold initial contact in underserved regions, similar to how technological innovations have supplemented human work since the Industrial Revolution. But it cannot replace a therapist who experiences countertransference, who allows themselves to be moved by the telling of a trauma. The MIT-OpenAI RCT also warns that even with moderate use, AI systems can have negative psychosocial effects, especially when intensive, daily conversations are intended to take on real therapeutic tasks.

What can be done if self-alienation through AI has already begun?

The first step is reflection without self-judgement. If AI has become an important emotional confidant, this initially indicates a deep need for connection, listening or support, and that is neither weak nor shameful. The question is whether AI truly satisfies this need or merely numbs it temporarily, while real human connections continue to disintegrate and inner alienation grows.

It is useful to take an honest look at your own usage habits: when do I turn to AI? Instead of calling a friend? Instead of looking for a therapist? Instead of having a difficult conversation? AI use can be a symptom of loneliness, and recognising this can become a motivator to seek out and nurture real social connections. If that is hard to do alone, it is a clear sign that professional support may help. What is needed is not shame, but the courage to face reality.

The most important points at a glance

·         AI chatbots are increasingly being used as emotional confidants in many areas of life, especially by young people and the socially isolated.

·         The initial enthusiasm stems from constant validation: AI confirms, does not judge and is always available, which provides short-term emotional relief.

·         The "empathy gap" describes the divide between artificial and genuine empathy: AI cannot feel, cannot be touched, cannot truly be there.

·         Current research findings (MIT, Stanford, Cambridge) indicate that intensive use of AI for emotional support is associated with greater loneliness and reduced social activity in the long term.

·         Alienation through AI creeps up gradually. Those who replace genuine closeness with AI interactions gradually lose touch with human connection.

·         Children and young people are particularly at risk from AI companions, as well as from misinformation and developmentally harmful content on uncontrolled platforms.

·         Dependence on AI assistants follows recognisable patterns: constant validation leads to habitual withdrawal from real relationships.

·         AI cannot replace psychotherapy; the relationship experience itself is the active ingredient, and it is bound to real human encounters.

·         Social skills can be lost if AI is used permanently as a coping strategy for interpersonal challenges.

·         Professional support is necessary and useful when AI has become the primary coping mechanism for emotional stress.

