Remarks




  1. We do not give a definition of consciousness or self-consciousness in this article. We only give some properties of the consciousness phenomenon that we want robots to have, together with some ideas of how to program robots accordingly.

  2. The preceding sections are not to be taken as a theory of human consciousness. We do not claim that the human brain uses sentences as its primary way of representing information. Allen Newell [Newell, 1980] introduced the term logic level of analysis of a person or machine. The idea is that behavior can be understood as the person, animal or machine doing what it believes will achieve its goals; ascribing beliefs and goals then accounts for much of its behavior. Daniel Dennett [Dennett, 1978] first introduced this idea of ascription, and it is also discussed in [McCarthy, 1979a].

    Of course, logical AI involves using actual sentences in the memory of the machine.

  3. Daniel Dennett [Dennett, 1991] argues that human consciousness is not a single place in the brain where every conscious idea appears. I think he is right about the human brain, but I think a unitary consciousness will work quite well for robots. It would likely also work for humans, but evolution happens to have produced a brain with distributed consciousness.

  4. Francis Crick [Crick, 1994] discusses how to find neurological correlates of consciousness in the human and animal brain. I agree with all the philosophy in his book and wish success to him and others using neuroscience. However, after reading the book, I think the artificial intelligence approach has a good chance of achieving important results sooner, though they won't be quite the same results.

  5. What about the unconscious? Do we need it for robots? Very likely we will need some intermediate computational processes whose results are not appropriately included in the set of sentences we take as the consciousness of the robot. However, they should be observable when this is useful, i.e., sentences giving facts about these processes and their results should appear in consciousness as a result of mental actions aimed at observing them; a minimal sketch of such on-demand observation appears after these remarks. There is no need for a full-fledged Freudian unconscious with purposes of its own.

  6. Should a robot hope? In what sense might it hope? How close would this be to human hope? It seems that the answer to the first question is yes. If the robot hopes for various things, and enough of the hopes come true, then it can conclude that it is doing well and that its higher-level strategy is working. If its hopes are always disappointed, then it needs to change its higher-level strategy.

    Using hopes in this way requires self-observation: the robot must remember what it hoped for. A small bookkeeping sketch appears after these remarks.

    Sometimes a robot must also infer that other robots or people hope or did hope for certain things.

  7. The syntactic form is simple enough. If p is a proposition, then Hope(p) is the proposition that the robot hopes for p to become true. In mental situation calculus we would write

     holds(Hope(p), s)

    to assert that in mental situation s, the robot hopes for p.

    Human hopes have certain qualities that I cannot decide whether we will want in robots. Hope automatically brings into consciousness thoughts related to what a situation realizing the hope would be like. We could design our programs to do the same, but this is more automatic in the human case than might be optimal. Wishful thinking is a well-known human malfunction.

  8. A robot should be able to wish that it had acted differently from the way it has done. A mental example is that the robot may have taken too long to solve a problem and might wish that it had thought of the solution immediately. This will cause it to think about how it might solve such problems in the future with less computation.

  9. A human can wish that his motivations and goals were different from what he observes them to be. It would seem that a program with such a wish could just change its goals.

  10. [Penrose, 1994] emphasizes that a human using a logical system is prepared to accept the proposition that the system is consistent even though it can't be inferred within the system. The human is prepared to iterate this self-confidence indefinitely. Our systems should do the same, perhaps using formalized transcendence; the iterated consistency extensions sketched after these remarks give the standard picture. Programs with human capability in this respect will have to be able to regard logical systems as values of variables and infer general statements about them. We will elaborate elsewhere [McCarthy, 1995b] on our disagreement with Penrose about whether the human is necessarily superior to a computer program in these respects. For now we remark only that it would be interesting if he and others of similar opinion would say where they believe the efforts outlined in this article will get stuck.

  11. Penrose also argues (p. 37 et seq.) that humans have understanding and awareness and that machines cannot have them. He defines them in his own way, but our usage is close enough to his that I think we are discussing how to make programs do what he thinks they cannot do. I don't agree with those defenders of AI who claim that some computer programs already possess understanding and awareness to the necessary extent.

  12. Programs that represent information by sentences but generate new sentences by processes that don't correspond to logical reasoning present introspection problems similar to those of logical AI. Approaches to AI that don't use sentences at all need some other way of representing the results of introspection if they are to use introspection at all.
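
The following is a minimal sketch, not part of the original paper, of the on-demand observability described in remark 5: results of intermediate computational processes live outside the robot's sentence store until a deliberate mental action of observation turns a fact about them into a sentence. The class and method names (MentalState, run_subprocess, observe) are illustrative assumptions, not an existing API.

    class MentalState:
        """Toy model of remark 5: a sentence store (consciousness) plus
        intermediate computational results that are not sentences."""

        def __init__(self):
            self.consciousness = []   # sentences the robot can reason with
            self.intermediate = {}    # results of non-conscious processes

        def run_subprocess(self, name, compute):
            # The result is stored, but no sentence about it enters consciousness.
            self.intermediate[name] = compute()

        def observe(self, name):
            # A mental action: turn a fact about an intermediate process
            # into a sentence and add it to consciousness.
            if name in self.intermediate:
                sentence = f"result({name}) = {self.intermediate[name]}"
                self.consciousness.append(sentence)
                return sentence
            return None

    robot = MentalState()
    robot.run_subprocess("edge_detection", lambda: 42)  # runs outside consciousness
    assert robot.consciousness == []                    # nothing has surfaced yet
    robot.observe("edge_detection")                     # deliberate mental action
    print(robot.consciousness)                          # ['result(edge_detection) = 42']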
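
In the same spirit, here is a minimal sketch, under the same caveat, of the hope bookkeeping suggested in remarks 6 and 7: the robot records holds(Hope(p), s) when a hope arises, later notes which hoped-for propositions became true, and judges its higher-level strategy by the fraction of hopes realized. The names HopefulRobot, record_hope, and strategy_ok are hypothetical, and the 0.5 threshold is an arbitrary placeholder.

    class HopefulRobot:
        """Toy bookkeeping for remarks 6-7: remember holds(Hope(p), s) and judge
        the higher-level strategy by how many hoped-for propositions came true."""

        def __init__(self):
            self.hopes = []        # pairs (proposition, mental situation)
            self.true_now = set()  # propositions observed to hold so far

        def record_hope(self, p, s):
            # Assert holds(Hope(p), s): in mental situation s the robot hopes for p.
            self.hopes.append((p, s))

        def observe_true(self, p):
            self.true_now.add(p)

        def strategy_ok(self, threshold=0.5):
            # Remark 6: if enough hopes come true, the higher-level strategy is
            # judged to be working; otherwise it should be revised.
            if not self.hopes:
                return True
            realized = sum(1 for p, _ in self.hopes if p in self.true_now)
            return realized / len(self.hopes) >= threshold

    robot = HopefulRobot()
    robot.record_hope("door_open", s=1)
    robot.record_hope("battery_charged", s=2)
    robot.observe_true("battery_charged")
    print(robot.strategy_ok())  # True: half of the recorded hopes were realized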
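
Finally, as a gloss on the iterated self-confidence of remark 10 (this is the standard textbook construction, not McCarthy's own formalized transcendence), a program that can treat logical systems as values of variables should be able to form the following tower of theories, where T is any consistent, recursively axiomatized theory containing enough arithmetic and Con(T) expresses its consistency:

    % iterated consistency extensions
    T_0 = T, \qquad T_{n+1} = T_n \cup \{\mathrm{Con}(T_n)\}, \qquad
    T_\omega = \bigcup_{n < \omega} T_n .

By Gödel's second incompleteness theorem each Con(T_n) is unprovable in T_n, so every step genuinely adds strength, and the iteration can be continued into the transfinite along the lines of Turing's ordinal logics; this is the sense in which the self-confidence can be iterated indefinitely.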





