Understanding artificial intelligence: What Karakuri Ningyō tell us about humans, machines and the future
At first glance, Japan's old robot dolls look like nostalgic toys - but they tell a remarkable story about technology, intelligence and our relationship with machines.
This article is not just about historical automata, but about the cultural origins of the phenomenon we now call artificial intelligence. Anyone who wants to know how deeply rooted our desire to bring machines to life is - and what that says about our image of humanity - will find a surprising mirror in the Karakuri Ningyō. It is worthwhile for anyone who wants not only to use technology but to understand it - including impulses from current research in Germany and internationally.
🔹 What it's about:
This blog post looks at the historical Karakuri Ningyō - mechanical dolls from Japan - and shows how they have shaped the way we think about artificial intelligence, machines, and the human-machine relationship today. The focus is on the following key questions:
What do Karakuri reveal about our early fascination with living technology?
How do mechanical dolls differ from modern AI - and where are the parallels?
What role do cultural expectations, emotional acceptance and social contexts play?
How do studies, historical comparisons and ethical debates help us to understand today's challenges around research, copyright and responsibility?
A contribution for all those who see the history of technology not as a retrospective, but as a mirror for the future.
What are Karakuri Ningyō - and why are we looking at them today?
Karakuri Ningyō are skilfully crafted mechanical dolls from the Japanese Edo period (1603-1868). Without electronics, equipped only with springs, cogwheels and lever mechanisms, they were able to serve tea, shoot arrows or perform dances. Their movements were so precise and human-like that they were considered true technological marvels.
Why are we dealing with them today, in the age of artificial intelligence? Because they represent an early form of the human desire to breathe life into machines - a desire that has persisted for centuries. Current research, for example at universities in Germany or by publishers such as Springer, is increasingly recognising the extent to which cultural narratives shape technical development.
The Karakuri Ningyō help us to understand that technology is never just a function - it is always a reflection of our ideas of what it means to be human.
How human-like were these old machines?
Although Karakuri Ningyō did not possess intelligence in the modern sense, to many observers they appeared similar to living beings. Their movements were fluid, they bowed, maintained eye contact and acted in ritual contexts. Their form was deliberately anthropomorphic, i.e. modelled on humans.
This figure of the almost-living being evoked amazement, closeness or even unease in many viewers - reactions that recur today when we encounter humanoid robots or chatbots powered by artificial intelligence.
This shows that it is not just about technical precision. It is also about cultural expectations that are deeply rooted in the collective memory - a realisation that has been confirmed several times by psychological studies in Germany and Japan.
What distinguishes mechanical dolls from modern artificial intelligence?
Mechanical dolls such as the Karakuri Ningyō were purely analogue: movements were triggered by complex spring mechanisms. There was no adaptation to the environment and no dialogue. The intelligence lay in the design - not in the behaviour.
Modern AI, on the other hand, is based on data-driven research, machine learning and partially self-optimising algorithms. It interprets inputs, changes outputs and reacts dynamically to new information. Nevertheless, a parallel remains: both forms of technology play with the human-machine relationship.
One criticism of current AI is that its decisions are opaque - just as the Karakuri Ningyō deliberately concealed their mechanics. In both cases, a kind of second reality is created in which technology appears to act autonomously.
Why are we fascinated by "living" technology?
The fascination with "animated" technology is not a new phenomenon - it is as old as the idea of breathing a soul into the inanimate. Ancient myths already reveal this lineage, as Taylor's book on machine fantasies in antiquity shows.
The Karakuri Ningyō are cultural evidence of this impulse. Springer articles on the history of technology also emphasise that people have been designing technology for centuries in such a way that it appears similar - human, familiar, emotionally legible.
This strategy not only has a psychological effect, but also a social one: it increases the acceptance of new technologies, reduces fear of contact and makes machines seem harmless. But this raises ethical questions.
Are there any studies on the emotional acceptance of human-like machines?
Yes - the emotional acceptance of machines has been examined in numerous studies, both in Germany and internationally. One example is the "AI and Emotion" project at the University of Augsburg, led by Prof. Dr. Schneider.
It showed that the more human a machine appears, the more strongly it is evaluated emotionally - both positively and negatively. Artificial systems that communicate the way we do trigger compassion, anger or confusion - an effect already observed with the Karakuri.
This research suggests that our reactions to AI are strongly shaped by old cultural patterns - a finding that is also relevant for companies that use AI in customer contact.
What questions we still need to ask ourselves about AI
The Karakuri Ningyō raise very topical questions: What happens if we only simulate intelligence - is that enough for us to take responsibility? Or do we need new ethical criteria for artificial behaviour?
Society is still struggling to define responsibility in an increasingly automated world. Researchers discuss the so-called "black box" of modern AI - systems whose decisions cannot be traced. The Karakuri were complex but transparent - today's AI often is not.
This second level of opacity creates uncertainty - and shows that the debate about intelligence is not just technical, but deeply social.
What Karakuri show about the relationship between technology and society
Karakuri Ningyō were never just technical gadgets. They were used in ritual contexts, such as tea ceremonies or religious festivals. Their use had social significance, not just functional value.
The situation is similar today: AI is not a "neutral" tool - it is integrated into social systems, influences decisions and structures power relations. Anyone who develops or uses AI also helps to shape social processes.
As recent articles in the Springer series "Ethics in the digital society" show, there is an increasing demand to understand technology design as a cultural act - not just as a technological task.
Why the history of AI does not begin with computers
Many people think that the history of artificial intelligence began with Turing, computer chips and neural networks. But this is a misconception.
As historians emphasise, precursors of "thinking technology" have existed for a long time - in mythological tales, in Renaissance automata or in the Karakuri Ningyō. This history shows that the desire for intelligent technology is not a new invention, but a recurring cultural motif.
Taylor's analyses make it clear that these ideas have remained relatively similar over the centuries - even if the technical implementation has changed. The history of technology is always also the history of ideas.
What does this mean for research, rights and copyright?
When artificial systems generate texts, images or music, the question arises: Who owns the copyright? Who holds the rights when an algorithm becomes creatively active? This discussion is not a side note - it affects research, business and society in equal measure.
The mechanical dolls did not create content - but they already raised questions about authorship and control. Today, these questions are more urgent than ever.
Current debates, for example in the EU Parliament or at Springer conferences, show that existing copyright law is reaching its limits. Clarifying these issues is crucial for research, cultural discourse - and not least for democracy.
What we can learn from Japan, Augsburg and modern technology ethics
The journey from the Karakuri Ningyō to ChatGPT shows: We need staying power when we think about technology. Artificial intelligence is not an isolated phenomenon - it is part of a long tradition of technical projections.
Places like Augsburg, where research into AI is combined with philosophy and cultural studies, show that interdisciplinary thinking is necessary. In Germany, too, there is a growing realisation that pure functionality is not enough - reflection, responsibility and ethics are needed.
The Karakuri teach us that technology is always also a stage - for our fears, our hopes, our ideas about life. Those who take this stage seriously will understand AI better - and shape its future more consciously.
Key findings at a glance
Karakuri Ningyō are early examples of artificial life forms - without electronics.
They show how strongly cultural ideas shape our understanding of technology.
Human reactions to machines are psychologically complex and emotional.
Modern AI must be handled not only with technical but also with social responsibility.
Questions about rights, copyright and responsibility are becoming increasingly urgent.
Research in Germany, for example in Augsburg or at Springer, contributes important perspectives.
The history of technology is also the history of ideas - and begins long before the digital age.
If you want to understand AI, you first have to understand what we are actually looking for in the human-machine relationship.