
DESCRIPTION:
AI systems such as ChatGPT and chatbots cannot develop consciousness. The misunderstanding surrounding artificial intelligence: what AI really is.
Can AI develop consciousness? Artificial intelligence and the artificial misunderstanding
Persistent myths circulate about artificial intelligence. The most widespread: AI systems could develop consciousness, and such consciousness would give them a reliable moral compass. This misunderstanding has consequences for public debate, regulation, and the relationships people build with digital systems. A factual assessment is long overdue.
What do we mean by consciousness, and where does the misunderstanding begin?
In philosophy and cognitive science, consciousness refers to the subjective quality of a sentient being's experience: the experience of something from an internal perspective. In 1974, Thomas Nagel posed the question ‘What is it like to be a bat?’ The question focuses on the quality of experience, on the ‘how’ of experiencing. This characteristic presupposes a sentient system.
AI systems based on statistical pattern processing do not possess such an internal perspective. The assumptions leading to the belief that AI could approach human-like consciousness are based on a confusion: equating competent output with experience. This equation is philosophically untenable and ranks among the most consequential myths in the discourse on AI.
What popular portrayals often convey is fiction. The difference between a convincing response and an experiencing counterpart is categorical, not gradual.
AI myths: What chatbots like ChatGPT actually do
Systems such as ChatGPT belong to the class of Large Language Models (LLMs). They are trained on vast amounts of text and, in the process, learn to identify statistical patterns in word sequences. The core objective: to predict the next token. This process runs across billions of parameters in a ‘deep neural network’.
Chatbots produce coherent text because they draw on patterns in a dataset that represents human writing. The language model does not draw on its own world of experience. It interpolates between learned patterns. What appears to be an assessment or recommendation is the result of statistical probability calculations.
The myth that chatbots possess their own intentions or beliefs is one of the most widespread misconceptions about AI systems. ChatGPT does not think. It calculates likely text continuations using a powerful architecture.
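How little this has to do with thinking can be made tangible with a minimal sketch (a toy bigram counter in Python, not a real language model): the ‘prediction’ of the next word is nothing more than a normalised frequency count over the training text.

```python
from collections import Counter, defaultdict

# A toy illustration (not a real LLM): estimate next-word probabilities
# purely from co-occurrence counts in a tiny "training corpus".
corpus = (
    "the model predicts the next word . "
    "the model learns statistical patterns . "
    "the model has no experience ."
).split()

# Count bigram frequencies: which word follows which.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) estimated from raw counts - nothing more."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In this corpus 'the' is most often followed by 'model', so that is what
# the "prediction" favours - a frequency fact, not an intention.
print(next_word_distribution("the"))  # {'model': 0.75, 'next': 0.25}
```

A real LLM replaces the counting with billions of learned weights, but the principle remains statistical continuation, not deliberation.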
How a language model works without human intelligence
An LLM is based on the Transformer architecture, which has dominated research in Artificial Intelligence (AI) since 2017. The model is trained on text datasets drawn from books, articles and websites. During training, its weights are continuously updated using gradient descent, an optimisation technique from machine learning.
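Gradient descent itself is plain numerical optimisation. The following minimal sketch (a single weight and a squared-error loss in Python, vastly simpler than training a Transformer) shows the kind of adjustment it performs: the weight is nudged repeatedly in the direction that reduces the error.

```python
# A minimal sketch of gradient descent (illustrative only): one weight is
# adjusted step by step in the direction that reduces a squared-error loss.
def loss(w, x, target):
    return (w * x - target) ** 2

def gradient(w, x, target):
    # d/dw (w*x - target)^2 = 2 * (w*x - target) * x
    return 2 * (w * x - target) * x

w = 0.0                 # initial weight
x, target = 2.0, 6.0    # the "correct" weight would be 3.0
learning_rate = 0.05

for step in range(20):
    w -= learning_rate * gradient(w, x, target)

print(round(w, 3))  # approaches 3.0 - pure numerical optimisation
```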
The result is a system that operates with impressive linguistic complexity, yet is incapable of any cognitive performance in the sense of human intelligence. AI research and cognitive science make a precise distinction here: linguistic competence is not a sign of understanding. What a language model generates is statistically driven; there are neither intentions nor goals, only predictions that happen to be plausible.
The terms ‘intelligence’ and ‘cognitive’ sound human. This makes them misleading when applied to AI systems without distinction.
The Chinese Room: What philosophers say about artificial systems
In 1980, John Searle devised a thought experiment that illustrates the limits of artificial linguistic competence. A person in a room follows rules for responding to Chinese symbols and provides correct answers without understanding Chinese. The system behaves competently. Yet there is no understanding.
Searle's argument remains debated today in cognitive science and the philosophy of mind. LLMs bring precisely this scenario to life: they process symbols according to statistical rules and mimic linguistic understanding without actually grasping it. The output is plausible. The process remains devoid of access to meaning in the phenomenal sense.
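A deliberately crude sketch makes the thought experiment tangible (the Python ‘rule book’ below is hypothetical and trivially small): the program returns fluent-looking replies by pure symbol lookup, without any access to what the symbols mean.

```python
# An illustrative sketch of the Chinese Room idea (hypothetical rule book,
# not real language processing): input symbols are mapped to output symbols
# by lookup alone - no meaning is involved anywhere.
rule_book = {
    "你好吗": "我很好",    # "How are you?" -> "I am fine"
    "你是谁": "我是程序",  # "Who are you?" -> "I am a program"
}

def room(symbols: str) -> str:
    # The "person in the room" only matches shapes against rules.
    return rule_book.get(symbols, "对不起")  # fallback: "sorry"

print(room("你好吗"))  # a fluent-looking reply, produced without understanding
```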
The myth that artificial systems can grasp meaning is philosophically ill-founded. Science fiction has popularised it. Cognitive science takes a more nuanced view.
Can artificial intelligence develop genuine empathy or consciousness?
Empathy presupposes the ability to imagine another experiencing being and to share in their affective state. This ability is biologically embedded in social mammals and neurologically complex. A system without experience can, by definition, develop neither genuine empathy nor consciousness.
What AI systems produce during interactions with users are phrases that sound empathetic. The recipient attributes such characteristics to the outputs. The system possesses no internal emotional perspective. It imitates linguistic patterns associated with empathy in human writing.
The misconception that artificial intelligence develops genuine empathy or is on the path to true consciousness has a psychological explanation: humans tend to follow social cues reflexively, even when their source is not an experiencing subject.
How AI systems are trained, and what emerges in the process
The training of a modern AI system takes place in several phases. First, the neural network is pre-trained on large amounts of data. This is followed by Reinforcement Learning from Human Feedback (RLHF): human evaluators rank the AI system's outputs, and the model's weights are further adjusted on the basis of these rankings.
This process does not generate values, judgment, or neurons in the biological sense. What emerges are behavioural tendencies. The model has been trained to generate outputs that have been assessed as helpful and harmless. Algorithms and weights do not replace biological experience.
The result is a powerful tool for language processing. It has not, however, acquired a moral compass. What appears to be ethical behaviour is a statistically conditioned pattern.
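How little ‘conscience’ is involved can be illustrated with a deliberately simplified sketch (the style labels and preference scores below are hypothetical, and real RLHF updates model weights rather than sampling from a table): human rankings become numbers, and the numbers merely shift which outputs are favoured.

```python
import random

# A drastically simplified sketch of the RLHF idea (illustrative only):
# human rankings are turned into numbers, and those numbers shift which
# kind of output the system produces more often. No values or judgment
# appear anywhere - only weighted sampling.
candidate_styles = ["helpful", "evasive", "harmful"]

# Hypothetical aggregate of human evaluators' rankings (higher = preferred).
human_preference_scores = {"helpful": 0.9, "evasive": 0.3, "harmful": 0.0}

# "Training" here means nothing more than favouring higher-scored styles.
weights = [human_preference_scores[s] for s in candidate_styles]
sampled = random.choices(candidate_styles, weights=weights, k=10)

print(sampled)  # mostly 'helpful' - a statistical tendency, not a conscience
```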
Why the brain is not a digital algorithm
The human brain operates biologically: using electrochemical signals, synaptic plasticity, hormonal modulation and a life context that shapes experience. The model of the human brain as a computer leads to false conclusions about AI.
The digital computing system of an LLM is efficient at processing text data. Nevertheless, it is not a brain. The brain's neural activity is deeply embedded in biological processes, and no algorithm, however complex its operations, replicates them. Efficiency describes a technological property. Experience is something else.
This distinction is scientifically precise and has consequences: technological innovation in AI, impressive as it is, does not overcome this fundamental limit.
Human characteristics: what is attributed to AI systems
The cognitive science literature documents a robust finding: humans attribute human characteristics to social signals, even when the source is not human. This tendency is particularly pronounced in interactions with chatbots and has been replicated in studies since the ELIZA experiments of the 1960s.
What the public perceives as AI empathy or AI morality is often projection. The half-truth is this: systems produce human-sounding outputs. The misleading aspect of this is that human characteristics are inferred from them. This conclusion is not logically sound.
A powerful language model is not a moral system. The confusion between these two characteristics is one of the most persistent myths surrounding artificial intelligence.
AI governance: Who bears responsibility for AI decisions?
AI governance touches on the core problem: if an AI system causes harm, who is liable? The answer lies with developers, operators and regulators. AI systems are not agents. They are products of human decisions: regarding training data, architecture, monitoring and the creation of evaluation criteria.
The myth of AI conscience shifts this responsibility. Anyone who assumes that an ethically acting system is exempt from oversight is mistaken. AI governance requires transparent responsibilities and precise regulation. Ethically relevant decisions regarding the use of AI remain human decisions.
Narratives about artificial intelligence circulate that obscure this structure of responsibility. This is not an academic problem. It is a political one.
What AI really achieves, and where the limits lie
AI is a powerful tool. Language models provide impressive assistance with research, formulation, summarisation and structuring. What they cannot do: judge, take responsibility, or experience. The artificial system reproduces statistically probable outputs. It has no beliefs and no internal perspective.
The system reflects what is contained in its training data. Therein lies its benefit. Therein lies its limitation. Those who understand this difference use AI technology with realistic expectations.
The myths surrounding AI are not harmless misunderstandings. They shape regulation, trust and societal expectations. Awareness of these myths is the first step towards dealing objectively with what artificial intelligence actually is.
Summary of key findings
· AI systems are statistical language systems without subjective experiential quality. They cannot develop consciousness.
· The myth that AI possesses human-like consciousness stems from conflating competent output with experience.
· The Chinese Room (Searle, 1980): Syntactic competence does not imply semantics. LLMs mimic understanding without actually achieving it.
· What appears to be an AI conscience is statistically conditioned behaviour based on human evaluations during training.
· Human characteristics attributed to AI systems arise in the recipient, not in the system.
· AI governance requires clearly defined structures of human accountability. The myths surrounding AI consciousness obscure these structures.
· Chatbots such as ChatGPT are powerful tools. Empathy, morality and consciousness are not among their characteristics.
· A rational approach to artificial intelligence requires an understanding of these myths.
References
1. Anthropic, “Claude’s new constitution” (22 January 2026). https://www.anthropic.com/news/claude-new-constitution
2. Humza Naveed et al., “A Comprehensive Overview of Large Language Models”. ACM Computing Surveys (2025). https://dl.acm.org/doi/10.1145/3744746
3. Shervin Minaee et al., “Large Language Models: A Survey” (2024). https://arxiv.org/abs/2402.06196
4. J. Li et al., “Can ‘consciousness’ be observed from large language model internal states?” AI Open (2025). https://www.sciencedirect.com/science/article/pii/S2949719125000391
5. David Cole, “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/chinese-room/
6. NDTV / AFP, “Is Claude Conscious? Anthropic CEO Dario Amodei Says Possibility Can’t Be Ruled Out” (6 March 2026). https://www.ndtv.com/world-news/is-claude-conscious-anthropic-ceo-dario-amodei-says-possibility-cant-be-ruled-out-11175771
7. Thomas Nagel, “What Is It Like to Be a Bat?” The Philosophical Review 83 (1974), pp. 435–450.
8. Aylin Caliskan et al., “Semantics derived automatically from language corpora contain human-like biases.” Science 356(6334) (2017), pp. 183–186.