Artificial Empathy, Imitation and Mimesis

Paul DUMOUCHEL* & Luisa DAMIANO**

*Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University
**University of Bergamo

I am trying to create a useless robot.

Takanori Shibata

The goal of this short presentation is to introduce a new research project (funded by IMITATIO) in which Dr. Luisa Damiano and I have recently become engaged. In many ways it is a strange enterprise, as it aims to relate two relatively distant fields of research on the basis of a common object, imitation. What makes this enterprise questionable is that in both fields of research the basic concept of imitation (or mimesis) is not clearly defined, inasmuch as there are ongoing discussions about its exact meaning, its extension and how this behavior can be implemented. There is therefore a danger that we will be seen as simply adding to the confusion; however, it is our hope that more light than heat will result from bringing together these two quite different domains.

First, before coming to the difficulties of the enterprise, a few words concerning its rationale are necessary. “Artificial empathy” is a term we coined to describe a rapidly growing field of research whose central characteristic is the development and understanding of social communicative behavior in artificial agents, either robots or virtual agents, whether in “machine – machine” relations or in “human – machine” settings. An important aspect of this type of research is its interdisciplinary character. Conferences, as well as major publications in this domain, often bring together specialists from many different fields: robotics, neuroscience, psychology, education, the arts (especially the performing arts), cognitive science, philosophy and anthropology.[1] This absence of well-defined borders in fact accurately reflects the nature of both the cognitive and technological projects involved in artificial empathy. On the technological side it is driven by the desire to develop artificial agents that can successfully interact with humans in social situations. This goal requires an understanding of social communication and of human affective reactions, topics which have been central to research in psychology, anthropology, education and developmental psychology. Furthermore, artists have long been engaged in eliciting emotional reactions through interaction between humans and at least partially artificial agents. On the cognitive side, one of the central motivations of this type of research is the belief that we will understand emotions, social communication, and the mechanisms responsible for imitation better to the extent that we can implement them in artificial agents. In other words, robots and virtual agents are seen as forms of experimentation and as scientific instruments that allow us to test and develop theories of emotion, of social communication and of cultural transmission. Again, such research is best carried out in close collaboration with representatives of disciplines which have long been engaged in the study of these objects: cognitive science and neuroscience, neurology, psychology, anthropology, ethology, philosophy. Therefore the interdisciplinary dimension of artificial empathy is not something that is added afterwards to enrich the project, but a necessary part of the enterprise given its technological goals and cognitive objectives. It is, so to speak, ‘by definition’ that this field of research involves many disciplines.

Mimetic theory[2] is a general theory of culture which argues that the imitative ability of humans plays a fundamental role in the creation and evolution of culture, as well as in the cognitive development of individuals. It also argues that mimesis is fundamentally linked to human violence (an issue that is conspicuously absent from artificial empathy[3]) and that human culture should be understood to a large extent as a means of protection against violence. It is also a trans-disciplinary approach in its claim that mimesis, human imitative behavior, has major consequences in many areas: religious practices, social institutions, politics, literary creation, war, economic development; areas that are usually considered the exclusive domains of different and independent disciplines. Mimetic theory is thus an “imperialist” theory that claims to have explanatory power in what are usually considered the domains of other disciplines. What brings these two fields of research together is the extensive place they give to imitation in the explanation of social behavior. They also share the ambition of contributing to fields of research that are not originally their own.

The major difficulty in relating these two approaches does not lie in the objects of research as such, many of which – imitation, social communication, cultural transmission, empathy and the study of the ‘psychological states’ of imitating agents – are common to both. Rather it lies in the different meanings of ‘imitation’ in the two approaches. However, as we will now see, this prima facie difficulty actually constitutes one of the major rationales for this research project.

About fifteen years ago, researchers in primatology and cognitive ethology in particular began distinguishing between four types of behavior which we generally tend to associate under the name of imitation: stimulus enhancement, emulation, mimicry and “true imitation”. All of these behaviors, they argued, constitute different forms of social learning, in the sense that they are ways in which an organism can learn through observing the behavior of another. At first sight they may all look like imitation, but, according to them, they actually rest on different cognitive mechanisms.[4] “Stimulus enhancement” is a form of social learning where the fact that one animal manifests interest in a given object makes that object more prominent or salient to other animals who are observing the first. As a consequence, it becomes (statistically) more likely that they will interact with the object and learn from that interaction. Note that in this case it is not directly from the first animal that the second learns, but from interacting with the object. This behavior, it is argued, is not a form of imitation, because the second organism does not reproduce the exact behavior of the first; rather, it simply gains interest in the object from observing the first organism’s behavior. Learning therefore essentially takes place through the ‘autonomous’ interaction between the second organism and the target object. “Emulation” happens when an organism discovers affordances of objects from observing the behavior of others. Again, as is the case with stimulus enhancement, what is involved in emulation is not properly imitation; rather, what happens is that through its observation of the behavior of a first organism, a second one discovers, for example, that a milk bottle can be opened or a door can be unlocked. However, in its later attempts at opening the bottle or unlocking the door, the second organism does not exactly replicate or copy the behavior it observed; on the contrary, it will often resort to different strategies to obtain the same end. “Mimicry”, which is sometimes also referred to as “slavish imitation”, occurs when an organism blindly copies the actions of another without understanding either the goal or the purpose of that action. This, it is argued, is not true imitation, because it does not involve any learning. “True imitation” only occurs when one organism reproduces the action of another in a context where it understands the goal of that action and recognizes that the action is an efficient way to reach that goal. Many proponents of these distinctions have argued that true imitation (if it actually exists) among animals is extremely rare and that even among humans many forms of behavior which are commonly viewed as imitation are best understood as examples of either emulation or stimulus enhancement.

In psychology, primatology and the cognitive sciences, the debate concerning the value of this classification, and the relative importance of imitation in human and animal behavior in relation to these other forms of social learning, is still raging. Interestingly enough, the artificial empathy community has mostly side-stepped the debate. Contrary to what the idea of “true imitation” seems to suggest, and perhaps because of the emphasis on implementing imitative behavior in artificial agents, the general consensus has been, first, that “producing a behavior which is similar to another does not necessarily require a semantic representation of the action to be performed, nor to stem from an intention to imitate.”[5] Second, it is thought that the above classification of social learning tends to hide the continuity that exists between these different types of matching behaviors.[6] Finally, and most importantly, this classification, which aims to reflect the level of cognitive investment in different forms of social learning, is essentially oblivious to the social communicative dimension of imitation which is fundamental to the artificial empathy enterprise.

One dimension of imitation is that of a learning mechanism. It is a way of learning about the world and of resolving difficulties through reproducing the successful behavior of other organisms. The distinctions between stimulus enhancement, emulation, mimicry and true imitation seek to identify different modes of social learning and to distinguish them in relation to the level of cognition which they require. The other dimension of imitation is social and communicative. Here “individuals copy the actions of others as a way to relate with each other: instead of being a means of obtaining a tangible resource, imitation is a means towards the social end of engaging in interaction.”[7] Imitation understood in this way is a means of relating agents to each other; it also constitutes a skill that is essential for successful social interaction. This dimension of imitation is therefore fundamental for the project of building socially competent artificial agents. However, from this point of view, when what is central is the social communicative dimension of the behavior, stimulus enhancement, emulation, mimicry and imitation seem closely related rather than discontinuous.

Similarly, mimetic theory focuses essentially on the social consequences of imitation and gives a central place to types of behavior which in many ways resemble emulation and stimulus enhancement more than what is currently defined as “true imitation”. “Appropriative mimesis” is the imitation of appropriative behavior, in its simplest form the imitation of the gestures others make to grasp, seize or take objects. In both emulation and stimulus enhancement, it is assumed that the second animal adopts an observational pose: that it embraces a third party’s point of view and refrains from interacting with the object, or with another similar object, until the first animal has finished its ‘demonstration’.[8] When the object of common interest is neither food nor a potential mate, this may well be what often happens in most of the animal world. Mimetic theory argues, however, that this is not the case among humans. When one person gives signs of interest in an object, it is often enough for another person to immediately try to interact with the same object. Rather than maintaining a third-person, observational point of view, the second individual tries to interact with the object and in the process to take over the place of the first. This may not be an optimal learning strategy, and it constitutes, as Girard argues, an inexhaustible source of conflicts. Parents and educators are well aware of this propensity in young children, and an important part of early socialization consists in bringing them to repress this immediate reaction and to defer the ‘satisfaction of desire’. “Wait for your turn”, “You can have it later”, “No, this is not yours” are, at a certain stage of children’s education, some of adults’ most commonly used sentences. Stimulus enhancement, then, could correspond to a weaker form of appropriative mimesis, or indicate the presence in certain animals of an inhibiting mechanism that is absent or weaker among humans.

As mentioned earlier, researchers who design and experiment with social robots do not simply consider that they are doing applied science. They view their artifacts and artificial agents as scientific instruments, as ways of discovering the nature of learning, of imitation, or of social attachment. They do not see themselves as engaged only in creating new and better technology; they also construe their enterprise as testing theories and discovering the nature of social interactions. They are engaged in a process of discovery, and they argue that we will know what imitation is when we can make a robot that can imitate. Thus, rather than trying to make a robot that applies this or that theory of imitation, their goal is to discover by doing, to find out what imitation is by making a robot that can imitate.

***

When still in the early stages of developing Paro – which has since become the most widely used social robot in the world – Takanori Shibata, when asked “What is your next project?”, would answer: “I am trying to create a useless robot.” For a social robot, unlike a dishwashing machine, an autonomous vacuum cleaner or a lawnmower, does not serve any particular purpose. On the contrary, to be a social being is to be able to adapt to a wide range of different situations and to engage in numerous different activities. Social beings do not serve any particular purpose and can fulfill many different functions at different times. By definition, a social robot cannot be a robot that is enslaved to any particular role or function. More generally, and more deeply, sociality does not have any purpose. It constitutes the necessary background, or rather the condition (in the sense in which we speak of the ‘human condition’), out of which all human purposes arise, and relative to which use can be determined and utility measured. Social beings are purposeful beings, and though they may be reduced or condemned to repeatedly fulfill the same function, their social dimension rests on their ability to ‘transcend’ the actual role or function to which others tend to restrict them.[9] This ability is precisely what constitutes the great advantage of slaves, servants or human workers in general over machines. There is no artificial creature that can cook dinner, mend jeans, vacuum the house, drive the kids to school, tend a garden of root vegetables, hunt peccaries in the jungle, decorate Christmas trees, play the flute, phone home to say it will be late, etc. The problem is not just technical – even though the technical difficulties involved in fabricating an artificial agent that could do all these things are immense – the issue is primarily ‘social’ or relational. Inventing an instrument that could fulfill only one of these functions is essentially a technical problem, but creating an artificial creature that could switch from one to the other is a completely different issue. Not only or primarily because of the technical capacities that this would require, but essentially because the list given above is incomplete; more precisely, it is open ended, infinite and indeterminate, in the sense that it is impossible to know in advance what the next element (or performance) to appear on the list will be. To create a social being is to create an open-ended creature that can adapt to new, unpredictable circumstances. Of course these new circumstances are not just anything; the area in which they may arise is not entirely unlimited, rather they are restricted to the ill-defined and fuzzy domain of things that human beings can do. This is the domain in which a social artificial agent must succeed in adapting. And to adapt in such a domain is not only a question of what you can do or learn; it is also a question of being accepted by others who already share and occupy this domain.

How do you resolve this problem? How do you create a creature that does not have any particular purpose, but can learn to fulfill many different ones, given that you do not know and cannot know in advance exactly what these purposes will be? Furthermore, how do you create a creature that will credibly interact with human agents, in the sense that it will always seem to have an existence, a ‘personality’, that extends beyond whatever function it is presently fulfilling? In robotics the answer to these questions can be summarized by two words: imitating and cheating. Social robots are a mixture of make-believe and of the imitation of humans. The two answers of course are not entirely separate. On the one hand, cheating, make-believe, is often just the lower end of imitating a certain behavior, in the sense of an imperfect and incomplete imitation. On the other hand, there is a point where pretending to have a certain capacity is so convincing that it becomes indiscernible from having it; one then begins to wonder where the difference lies.

I now want to look a little more closely at these two characteristics of social robots, imitation and cheating, with the help of two examples: Paro, developed by Professor Takanori Shibata, and Geminoid, created by Professor Hiroshi Ishiguro. These are two very different realizations, but both are, each in its own way, social robots.

I Paro

Paro is defined by its inventor as a “mental assist” robot. It was designed to physically interact with human beings; it has the appearance of a baby harp seal and weighs 2.8 kilograms. It is not a mobile robot: it needs to be carried and cannot go by itself from place to place. However, it can move its rear and front fins. It can blink its eyes, raise its head and cry.[10] Mostly, Paro reacts to its own name (and can learn its new name if it has been given one). When called it will raise its head and turn it in the direction from which the sound comes. It will do the same when it hears a loud noise. It has sensors that allow it to know when it is being touched and how it is being handled, for example caressed rather than hit, and it will react differently depending on the nature of the interaction. It is also designed in such a way that even though it has a finite number of basic behaviors, the number of emerging behaviors in response to being handled is properly infinite.[11]

Paro is a very good-looking little animal-like creature covered with hand-crafted artificial white fur. Most people want to touch and hold it as soon as they see it, and they rapidly become attached to Paro because of how it ‘spontaneously’ reacts when it is being handled. Given its absence of motility, it is always available for interaction. It is also less in danger of hurting the people it interacts with, and unlike a dog or a cat it cannot run away, spill a glass of water, break a precious vase or sharpen its claws on furniture. Finally, it is very robust and can be handled relatively roughly by many different persons without breaking.

Paro is mostly used in nursing homes and hospitals as a substitute for pet therapy. Apart from the advantages already mentioned in the previous paragraph, unlike an animal its artificial fur is antiseptic, it does not carry any lice or germs, and Paro does not need to be toilet trained or fed. You simply recharge it periodically by connecting it to household electrical current. Amusingly enough, the connector at the end of the wire that goes into its mouth has the shape of a pacifier! Finally, it does not become stressed by being handled too frequently and does not develop a jittery, unstable character. It promises the advantages of pet therapy without the inconveniences. Paro interacts mostly with old people and young children, but it is also sometimes (mostly in Japan) bought by individuals or couples as a pet companion. Studies have shown that interacting with Paro has a significant positive effect on the mental health, both the cognitive abilities and the emotional reactions, of elderly people in nursing homes. It also has a positive influence on the number and quality of social interactions among people in nursing homes where Paro is present. It is furthermore effective in improving the mood and reducing the incidence of depression among young children who have to stay in hospitals for long periods of time. In a sense, of course, it could be argued that Paro does serve a purpose: that of pet therapy. However, it serves that purpose by not doing anything in particular and by being available for repeated interactions, interactions which in themselves do not have any goal or purpose other than that of being social interactions, so that the purpose or function which Paro serves is accidental or incidental.[12] One could argue that this is precisely why it is so successful. That is why people can interact with it as a companion rather than as a tool or instrument whose value and utility are relative to a specific task, and why they do not get bored with it. Unlike many other social robots, which turn out to be little more than sophisticated toys, people do not lose interest in Paro, and even after many months or more than a year its beneficial social effects persist.

The first quality of Paro is perhaps that it has the shape of a baby seal. This is a shape that is familiar (and cute) to most people. However, no one (or just about no one) has ever regularly interacted with a baby seal. In consequence, nobody has any particular expectation as to what constitutes a normal behavior or reaction on the part of a baby seal. If Paro had the shape of a dog or cat, we would have a basis of comparison to judge whether its behavior was ‘natural’. Furthermore, a dog or cat or rabbit that cannot move is not credible; yet making a four-legged robot that can move around the way an animal does is an immense technical challenge. Paro‘s lack of motility therefore constitutes a double advantage. First, it makes it easier and safer to use in the context of hospitals and nursing homes, where it has found an important niche. Second, it greatly simplifies the technical issues involved in making it a credible imitation of a baby seal: young seals move very little and only slowly on land. Paro was made to resemble a baby seal as closely as possible. Prof. Shibata based his robot’s appearance not only on pictures and films; he also spent time directly observing seals in their natural habitat in the Canadian Arctic. There is, however, in this imitation, this attempt to faithfully reproduce the appearance of a real baby harp seal, a large part of make-believe that plays an important role in the robot’s success. Paro seems much more natural than it really is because we have no point of reference, because nobody has ever had a domesticated baby seal as a household pet. This makes the ‘suspension of disbelief’ much easier. The acceptance of Paro as a credible ‘animal’ provides a context in which it can be seen to imitate or reproduce some normal or common behaviors. Like an animal or a young child, it “pays attention” when it hears a loud noise, it “responds” to its name, and it “gives signs of contentment” when caressed and “of displeasure” when treated roughly. Furthermore, its behavior is not entirely predictable. Within the very limited parameters of what it can do, Paro does not react the same way when handled by different persons or when handled at different times by the same person. In consequence, Paro will at times give the impression of being “strange today” or of being “happy”. These variations will be interpreted as indications that it has different “moods” or “preferences” or a “character”. Of course Paro has no such mental states; all it has in its architecture are two hierarchically organized layers of processes that generate its different forms of behavior. In particular, the “behavior generation layer… generates control references for each actuator to perform the determined behavior. The control reference depends on the magnitude of the internal states and their variation. For example, parameters can change the speed of movements or the number of instances of the same behavior. Therefore although the number of basic patterns is finite, the number of emerging behavior is infinite because of the varying number of parameters. This creates life-like behavior.”[13] Paro, in a very limited domain of action, imitates life-like behavior; yet this is only make-believe, or is it? Imitating life-like behavior is imitating something of which no token constitutes a model, though each constitutes an instance. This is precisely what Paro does; it produces sequences of action none of which is a model for all the others. Is this imitation, though, and in what sense?
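The quoted passage describes an architecture rather than an implementation, but its logic can be made concrete with a short sketch. The following Python fragment is purely illustrative: the names (BASIC_PATTERNS, InternalState, generate_behavior) and the parameters are invented for the example and are not Paro’s actual software. Its only point is to show how a finite repertoire of basic behavior patterns, modulated by slowly drifting internal-state parameters, yields command sequences that never repeat exactly, which is the sense in which a finite set of patterns produces “infinite” emerging behavior.

```python
import random

# Illustrative sketch only: a finite set of basic behavior patterns whose
# execution is modulated by internal-state parameters, so that the emerging
# behavior varies without any new basic pattern being added.

BASIC_PATTERNS = {
    "raise_head": ["neck_up"],
    "blink":      ["eyelids_close", "eyelids_open"],
    "wag_fins":   ["fin_left", "fin_right"],
    "cry":        ["vocalize"],
}

class InternalState:
    """Hypothetical, slowly drifting internal parameters."""
    def __init__(self):
        self.arousal = 0.5      # scales the speed of movements
        self.persistence = 0.5  # scales how many times a pattern repeats

    def update(self, stimulus_intensity):
        # The internal state drifts with stimulation plus a little noise.
        self.arousal = min(1.0, max(0.0,
            0.8 * self.arousal + 0.2 * stimulus_intensity + random.uniform(-0.05, 0.05)))
        self.persistence = min(1.0, max(0.0,
            self.persistence + random.uniform(-0.1, 0.1)))

def generate_behavior(pattern_name, state):
    """Turn one basic pattern into parameterized actuator commands."""
    repetitions = 1 + round(2 * state.persistence)  # 1 to 3 instances
    speed = 0.5 + state.arousal                     # relative speed factor
    commands = []
    for _ in range(repetitions):
        for actuator in BASIC_PATTERNS[pattern_name]:
            commands.append({"actuator": actuator, "speed": round(speed, 2)})
    return commands

# The same stimulus ("being caressed") produces a different command sequence
# on each occasion, although the basic pattern is the same.
state = InternalState()
for _ in range(3):
    state.update(stimulus_intensity=0.7)
    print(generate_behavior("wag_fins", state))
```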

If it is imitation, it is the imitation of something very abstract rather than of any specific behavior. In fact, Paro does not do very much and, in particular, there is one class of actions which it never performs: all actions that involve any type of object. Its behavior is limited to interactions with other agents. It never interacts with anything that is not an agent. Paro never bites a ball or runs after it. It is not even interested in itself; Paro does not scratch itself or lick its fur. What it does is react to your voice and move its tail and fins when you caress it, but it never reacts to anything in the world. That is even true of the motion of its head in the direction of a loud noise: because it does not lead to any other action directed towards that source, it only has the value of a signal to, and of complicity with, other hearers. There is no world between Paro and us, no possible world in which it could have an interest. Because of that, there is no behavior of Paro that cannot be interpreted as directed towards other agents. Paro is a social creature with a vengeance!

This is to some extent paradoxical, given that Paro‘s ability to facilitate social interactions among residents of nursing homes comes from the fact that it constitutes an object between them, rather than an agent. Paro does not participate in the conversation of the group of which it is the focus as another partner or interlocutor, but as a common center of interest, as something we talk about rather than someone we talk to, a pet that two or more persons can caress simultaneously. Paro facilitates conversation and social exchanges by standing in between agents rather than among them, by giving them something to talk and to fuss about which is not one of them. Paro reacts as an agent on a one-to-one basis, but it is an agent that is oblivious to group activity and to everything that is about it rather than addressed to it. In consequence, there is a dimension of sociality which Paro clearly lacks: the ability to perceive itself as a third party, to resist the objectification that often comes with that position, or to choose that mode of relation as a way of being with others.[14] Paradoxically, Paro is unable to relate to others precisely in the way its presence allows others to interact among themselves. There is no object that can intervene to mediate its relation to others. It can probably be argued, however, that these limitations of Paro are precisely the key to its success.

II Geminoid

Geminoid, which was created by Professor Hiroshi Ishiguro, constitutes a completely different attempt at making a social robot. As its name suggests, Geminoid is a double; it is in fact the exact double of Ishiguro. Same height, same skin, eye and hair color, same features, same way of dressing: Geminoid is a synthetic Ishiguro that can talk and move a little. Like Paro, Geminoid can be moved about but is not motile. Like Prometheus, Geminoid has been condemned by the gods that rule over him to remain attached for the rest of his existence, not to a rock in the Caucasus mountains, but to a humble chair, chained by the tubes and wires that bring him intelligence and the air that drives his pneumatic actuators. Unlike Paro, Geminoid is not an autonomous robot. He incorporates neither his own power source nor the intelligence that makes him social. In fact, Geminoid requires an operator, a human homunculus sitting in another room, from where he or she watches, on a computer screen, the world through the eyes (cameras) of the robot. Geminoid is a mask, a communication interface. What kind of social robot is it, then, if it is not autonomous, and if conversing with him ultimately is conversing with the person who operates him from a more or less distant room? In what way is it more than a highly expensive puppet? The answer is that Geminoid is essentially a scientific instrument, a tool to pursue a specific kind of scientific inquiry, a help in answering certain questions.

What conditions must an artificial agent satisfy in order to be a convincing social partner? That is the central question Geminoid is designed to help answer. What is aimed at is not just a minimal social partner, like Paro, but a full-fledged interlocutor that can be considered an equal. The strategy adopted here in trying to answer this question is: perfect imitation. Geminoid is a perfect (or at least as perfect as possible) copy of the outward appearance of a human being. Because it is the double of a real, existing human being, rather than a sheer plastic creation, like a monument to the unknown fallen soldier or the statue of a Greek god, whose features do not correspond to anyone in particular, there is a sense in which Geminoid has to live up to the highest possible standard of comparison: a living human being who can be identified and with whom we can interact. He (it?) has to be understood in the context of what Professor Ishiguro defines as the “Total Turing Test”. Recall the Turing Test, invented by the British mathematician Alan Turing as a procedure to determine whether a machine can think. It is based on a parlor game in which a person, by asking indirect questions, which must be answered truthfully, of a man and a woman hidden in a different room, tries to determine which one is the man and which the woman. Turing argued that if one of the two persons hiding could be replaced by a computer without the questioner noticing the difference in the answers he receives, in other words if the computer could fool the questioner into believing it was either a man or a woman, then we could say that that machine can think. According to Turing, then, implicitly at least, whether or not a machine can think does not correspond to any of its intrinsic characteristics, but depends on the judgment other thinkers have of its performance. Hence the idea of a Total Turing Test, which is a generalization of the original test and whose goal is to determine the extent to which an android has human intelligence, that is, an embodied and social intelligence. The difference here is that for Turing intelligence essentially corresponded to an intellectual ability: the capacity to answer searching questions appropriately in a context where the questioner does not directly interact with his or her interlocutor, but only with the answers it provides. Expressed in the language of analytic philosophy, intelligence is identified here with propositional content in a given discursive context. To the contrary, intelligence is viewed by Ishiguro and by designers of social robots as a social phenomenon distributed among interacting agents. That is why, according to Ishiguro, “if we can build a humanlike robot or android that is accepted as a human by people, then it means that the robot has humanlike intelligence.”[15] Such is the Total Turing Test, and Geminoid must be viewed as a step in that direction.
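To underline how little the original test asks of the machine’s body, here is a minimal sketch of the imitation game’s structure. It is an illustrative abstraction under simplifying assumptions (the respondents are reduced to question-to-answer functions, and the interrogator interface is invented), not Turing’s own formulation; its only point is that the interrogator interacts exclusively with answers, which is precisely the restriction the Total Turing Test removes by putting an embodied android in front of the judge.

```python
import random

def imitation_game(human_respondent, machine_respondent, interrogator, n_questions=5):
    """Minimal, illustrative sketch of the imitation game's structure."""
    # Hide the two respondents behind neutral labels, in random order.
    labels = dict(zip(random.sample(["X", "Y"], 2),
                      [human_respondent, machine_respondent]))
    transcript = []
    for _ in range(n_questions):
        question = interrogator.ask(transcript)
        # The interrogator never meets the respondents: only their answers travel.
        answers = {label: respond(question) for label, respond in labels.items()}
        transcript.append((question, answers))
    guess = interrogator.guess_machine(transcript)  # returns "X" or "Y"
    machine_label = next(l for l, r in labels.items() if r is machine_respondent)
    return guess != machine_label  # True if the machine escaped detection

class NaiveInterrogator:
    """Toy interrogator: asks one fixed question and guesses at random."""
    def ask(self, transcript):
        return "What does fresh bread smell like to you?"
    def guess_machine(self, transcript):
        return random.choice(["X", "Y"])

human = lambda q: "Warm and a little sweet; it reminds me of my grandmother's kitchen."
machine = lambda q: "Fresh bread emits volatile compounds that the nose detects."
print(imitation_game(human, machine, NaiveInterrogator()))
```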

Technically, Geminoid is a special kind of communication interface. Ideally it should allow Professor Ishiguro to give a class, or to sit in a meeting at the ATR laboratories in Kyoto, while being in front of his computer in his office at the University of Osaka. Such an event could be understood as a kind of three-dimensional, physically embodied teleconference, where the caller sees, through Geminoid’s cameras, the place where he is not, while simultaneously controlling the movements of the robot in the space from which he is absent, in order to create the impression of his real presence. There are in fact two issues involved in such successful communication: presence and conviction. There is no doubt that to some extent Geminoid is physically present. When in a room with it, at least at first, he is very hard to ignore. In fact most people’s reaction when first encountering him (it was my own) is one of slight repulsion and anxiety. Geminoid is too real, too human-like, to be dismissed like a toy or even like a mere artificial creature, and yet he clearly is not human. There is in consequence something strangely troubling about him.
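Reduced to its data flow, such an interface is a simple bidirectional loop: sight and sound stream from the android to the operator, and motion and speech commands stream back. The sketch below shows only that structure; every class and method name in it is an assumption introduced for illustration, not the actual Geminoid control software.

```python
import time
from dataclasses import dataclass

@dataclass
class SensorFrame:
    image: bytes   # what the android's cameras see
    audio: bytes   # what its microphones hear

@dataclass
class Command:
    head_pose: tuple   # e.g. (pan, tilt) targets for the head actuators
    utterance: str     # speech to be played through the android

class AndroidLink:
    """Stand-in for the network connection to the remote android."""
    def read_sensors(self) -> SensorFrame: ...
    def send(self, command: Command) -> None: ...

class OperatorConsole:
    """Stand-in for the operator's screen, microphone and controls."""
    def display(self, frame: SensorFrame) -> None: ...
    def read_input(self) -> Command: ...

def teleoperation_loop(android: AndroidLink, console: OperatorConsole,
                       period_s: float = 0.05) -> None:
    """Run the perceive-display-command cycle at a fixed rate."""
    while True:
        frame = android.read_sensors()   # android -> operator: sight and sound
        console.display(frame)           # the operator sees the room he is not in
        command = console.read_input()   # operator -> android: motion and speech
        android.send(command)            # the android lends the operator its body
        time.sleep(period_s)
```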

In 1970 the Japanese roboticist Masahiro Mori proposed the hypothesis of the ‘uncanny valley’ (bukimi no tani 不気味の谷). Mori argued that as robots became more human-like in form and ability, interacting with them would become easier and more natural, so that the ease of interaction and familiarity with robots could be described as a curve that rises steadily as robots become more and more human-like. However, he added, there will be a point where robots are nearly perfectly human-like, and yet with something missing. At that point, interacting with one of those false humans will be a bit like talking to a “moving corpse”. It will engender a strong impression of eeriness and unfamiliarity; in consequence, communicating with such artificial agents will be much more difficult and unnatural than interacting with humanoid robots whose outward appearance clearly says “I am a robot”. Such is the uncanny valley. Geminoid can be understood as an attempt to cross the uncanny valley. How do you create an artificial agent that has a real and convincing presence as a partner in our social conversation? Where do you start?

First, you eliminate all problems related to the robot understanding what it is being told or asked, and all problems related to knowing the appropriate social code and the right type of reaction in a given circumstance: Geminoid is controlled by a human operator, which ensures that it has the appropriate kind of intellectual capacities and social knowledge. Second, you eliminate all problems relative to the robot’s external appearance: it is the perfect double of a human person, so no one should complain that it looks strange and different or that it has a bolt where its nose should be. Now you can ask: what is missing? In what way are Geminoid‘s movements or reactions unnatural or unconvincing? How must you program and position the actuators so that when the arm moves, not only is the motion human-like and natural, but the way the skin deforms as the arm moves is also similar to the way it is deformed by contracting human muscles?

What is missing, then? Does Geminoid succeed in crossing the uncanny valley? The answer, I think, is clearly no.[16] Why? Essentially, because Geminoid does not move enough. It is not just a question of basic physiological motion: the fact that a human’s chest moves as he or she breathes, that our eyes blink all the time, our nostrils dilate, our pupils and eyes change shape as we look at something far or close and depending on the quantity of light in the environment. Rather, it is that humans, unlike Geminoid, cannot stand still. They move all the time, scratch themselves, move their shoulders, turn their heads, cross this leg then that, shift their weight from left to right or front to back. These, and an innumerable quantity of other movements, are mostly unconscious; we are unaware of them, and we are mostly unaware of similar movements on the part of our interlocutors. Yet, even if we are unaware of them, they form a fundamental part of the experience of being in the presence of another human. Geminoid does not move except when it engages in an explicit motion; among us, explicit motions arise out of a continuum of movements that only ends with death. These movements further play a fundamental role inasmuch as they are not simply random. When two individuals are in the presence of each other, their body motions tend to become coordinated. Among us, there is a fundamental mimesis that takes place at the level of these simple, spontaneous and mostly unconscious motions. If the person you are talking to touches her face, the probability that you will touch your face in the next minute goes up dramatically. This basic bodily mimesis, this physical conversation that is permanently ongoing between interlocutors, is part of the way in which we collectively determine each other’s intentions towards each other. This Geminoid does not do. It is not part of this exchange.

Notes

  • [1] See for example: L. Canamero & R. Aylett (eds.) Animating Expressive Characters for Social Interaction, Amsterdam: John Benjamins Publishing Company (2008); C.L. Nehaniv & K. Dautenhahn (eds.) Imitation and Social Learning in Robots, Humans and Animals, Cambridge University Press (2007); J.-M. Fellous & M. Arbib (eds.) Who Needs Emotions? The Brain Meets the Robot, Oxford University Press (2005); R. Trappl, P. Petta & S. Payr (eds.) Emotions in Humans and Artifacts, Cambridge, Mass.: MIT Press (2002).
  • [2] First developed by René Girard in a series of books; see in particular R. Girard Deceit, Desire and the Novel, Baltimore: Johns Hopkins University Press, 1966; Violence and the Sacred, Baltimore: Johns Hopkins University Press, 1977; Things Hidden Since the Foundation of the World, Stanford University Press, 1987; Battling to the End, Michigan State University Press, 2010.
  • [3] With the notable exception of studies on bullying and artificial empathy, for example R. Aylett et al. “Expressive characters in anti-bullying education” in Animating Expressive Characters for Social Interaction (L. Canamero & R. Aylett, eds.), Amsterdam: John Benjamins Publishing Company (2008), pp. 161-176; K. Dautenhahn et al. “Bullying behaviour, empathy and imitation: an attempted synthesis” in Imitation and Social Learning (C.L. Nehaniv & K. Dautenhahn, eds.), Cambridge University Press (2007), pp. 323-340.
  • [4] For an overview see: M. Tomasello The Cultural Origins of Human Cognition, Harvard University Press, 1999; S. Hurley & N. Chater “Introduction” in Perspectives on Imitation, Vol. 1: Mechanisms of Imitation and Imitation in Animals (S. Hurley & N. Chater, eds), Cambridge, Mass.: MIT Press (2005), pp. 1-52; as well as L. Huber “Emulation learning: the integration of technical and social learning” in Imitation and Social Learning, pp. 427-439.
  • [5] A. Revel & J. Nadel “How to build an imitator” in Imitation and Social Learning, p. 286.
  • [6] C.L. Nehaniv “Nine billion correspondence problems” in Imitation and Social Learning, pp. 35-46.
  • [7] M. Carpenter & J. Call “The question of ‘what to imitate’: inferring goals and intentions from demonstrations” in Imitation and Social Learning, p. 138.
  • [8] At least this is what is assumed in nature; in experimental contexts the separation can be ensured through the experimental setup.
  • [9] Of course the fact that we tend to reduce each other to a particular use or function is also an important part of sociality.
  • [10] Paro has two different cries: a call and a cry that is a response when interacting with people.
  • [11] T. Shibata, K. Wada, T. Saito & K. Tanie “Psychological and social effects to elderly people by robot-assisted activity” in Animating Expressive Characters for Social Interaction (L. Canamero & R. Aylett, eds.) Amsterdam: John Benjamins Publishing Company, 2008, pp. 173-193.
  • [12] One could argue that, in the terms of the philosophy of science, pet therapy is the ‘proper function’ of Paro, since it was designed for that end, and if tokens of Paro exist today and become more numerous, as opposed to other social robots, it is precisely because it successfully fulfills this function. However, it nonetheless remains that Paro accomplishes its proper function by not doing anything at all! This suggests that Paro constitutes a counterexample to most theories of proper function, rather than that its proper function is not to do anything at all.
  • [13] Shibata et al., p. 184.
  • [14] These are abilities that dogs usually exhibit.
  • [15] H. Ishiguro, “Scientific Issues concerning Androids” in The International Journal of Robotics Research, 26:1 (2007), p. 8.
  • [16] However, in a case like this, “the proof of the pudding is in the eating”, as we say. Nothing can replace the experience of interacting with Geminoid. This is why ‘no’ is my answer; others may have a different experience.