Robots Should Not be Equipped with Human-like Emotions




Some authors, e.g. Sloman and Croucher [Sloman and Croucher, 1981], have argued that sufficiently intelligent robots would automatically have emotions somewhat like those of humans. We argue that while it is possible to give robots human-like emotions, it would require a special effort, and moreover it would be a bad idea if we want to use them as servants. In order to make this argument, it is necessary to assume something, though as little as possible, about human emotions. Here are some points.

  1. Human reasoning operates primarily on the collection of ideas of which the person is immediately conscious.

  2. Other ideas are in the background and come into consciousness by various processes.

  3. Because reasoning is so often nonmonotonic, conclusions can be reached on the basis of the ideas in consciousness that would not be reached if certain additional ideas were also in consciousness. For example, one may conclude that a certain bird can fly, a conclusion that would not be reached if the idea that the bird is a penguin were also in consciousness. (A small sketch of this appears after the list.)

  4. Human emotions influence human thought by influencing what ideas come into consciousness. For example, anger brings into consciousness ideas about the target of anger and also about ways of attacking this target.

  5. Human emotions are strongly related to blood chemistry. Hormones and neurotransmitters belong to the same family of substances. The sight of something frightening puts certain substances into our blood streams, and these substances may lower the thresholds of synapses whose dendrites have receptors for them.

  6. A design that uses environmental or internal stimuli to bring whole classes of ideas into consciousness is entirely appropriate for a lower animal. We inherit this mechanism from our animal ancestors.

  7. According to these notions, paranoia, schizophrenia, depression and other mental illnesses would involve malfunctions of the chemical mechanisms that bring ideas into consciousness. A paranoid who believes the Mafia or the CIA is after him, and acts accordingly, can lose these ideas when he takes his medicine and regain them when he stops. Certainly his blood chemistry cannot encode complicated paranoid theories, but it can bring ideas about threats into consciousness from wherever and however they are stored.
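
The dependence of conclusions on what happens to be in consciousness (point 3) can be made concrete with a small sketch. The following Python fragment is not from the paper; the predicate names (bird, penguin, tweety) and the single default rule are illustrative assumptions, chosen only to show a conclusion being withdrawn when one more idea enters consciousness.

    # A minimal sketch of point 3: a nonmonotonic (default) conclusion that
    # depends on which ideas are currently in consciousness.  The names are
    # hypothetical; they are not McCarthy's notation.

    def concludes_flies(conscious_ideas):
        """Default rule: conclude the bird flies unless an exception is in consciousness."""
        return ("bird(tweety)" in conscious_ideas
                and "penguin(tweety)" not in conscious_ideas)

    # With only one idea in consciousness, the conclusion is drawn.
    print(concludes_flies({"bird(tweety)"}))                     # True

    # Bringing one more idea into consciousness retracts the conclusion,
    # even though no previously held idea has been denied.
    print(concludes_flies({"bird(tweety)", "penguin(tweety)"}))  # False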

These facts suggest the following design considerations.

  1. We don't want robots to bring ideas into consciousness in an uncontrolled way. A robot that is to react against (say) people it considers harmful should include such reactions in its goal structure and prioritize them together with its other goals. Indeed we humans advise ourselves to react rationally to danger, insult and injury. ``Panic'' is our name for reacting directly to perceptions of danger rather than rationally.

  2. Putting such a mechanism into a robot is certainly feasible. It could be done by maintaining some numerical variables in the system, e.g. a level of fear, and making the mechanism that brings sentences into consciousness (short term memory) depend on these variables; a minimal sketch of such a mechanism follows this list. However, human-like emotional structures are not an automatic byproduct of human-level intelligence.

  3. It is also practically important to avoid making robots that are reasonable targets for either human sympathy or dislike. If robots are visibly sad, bored or angry, humans, starting with children, will react to them as persons, and they will then very likely come to occupy some status in human society. Human society is complicated enough already.
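
Design consideration 2 can be sketched in the same spirit. The fragment below is only an illustration under assumed names (RobotMind, fear_level, refresh_consciousness, and the example sentences); it shows a numerical emotion variable biasing which stored sentences are brought into short-term memory, with reactions to perceived threats handled through that same controlled mechanism rather than by direct triggering. It is not a specification of how such a robot should actually be built.

    # A minimal sketch, with assumed names, of maintaining a numerical
    # emotion variable and making the mechanism that brings sentences
    # into consciousness (short term memory) depend on it.

    class RobotMind:
        def __init__(self):
            self.fear_level = 0.0          # hypothetical numerical emotion variable
            # Each stored sentence has a base salience and a fear relevance.
            self.long_term_memory = {
                "danger(intruder)":         {"base": 0.1, "fear": 0.9},
                "goal(assist_user)":        {"base": 0.6, "fear": 0.0},
                "location(charger, room3)": {"base": 0.4, "fear": 0.0},
            }
            self.short_term_memory = []    # the "conscious" sentences

        def perceive_threat(self, severity):
            # Perception raises the variable; it does not directly force a
            # reaction, which stays a goal prioritized with the other goals.
            self.fear_level = min(1.0, self.fear_level + severity)

        def refresh_consciousness(self, capacity=2):
            # Fear raises the salience of fear-relevant sentences, so they
            # are more likely to enter short-term memory, but selection is
            # the same controlled mechanism used for every other sentence.
            def salience(entry):
                return entry["base"] + self.fear_level * entry["fear"]
            ranked = sorted(self.long_term_memory.items(),
                            key=lambda kv: salience(kv[1]), reverse=True)
            self.short_term_memory = [s for s, _ in ranked[:capacity]]
            return self.short_term_memory

    mind = RobotMind()
    print(mind.refresh_consciousness())  # ['goal(assist_user)', 'location(charger, room3)']
    mind.perceive_threat(0.8)
    print(mind.refresh_consciousness())  # ['danger(intruder)', 'goal(assist_user)']

The point of the sketch is only that such a mechanism must be put in deliberately; nothing in the rest of the design produces it as a byproduct.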






John McCarthy