Remarks

 

  1. In [Nagel, 1974], Thomas Nagel wrote ``Perhaps anything complex enough to behave like a person would have experiences. But that, if true, is a fact that cannot be discovered merely by analyzing the concept of experience.'' This article supports Nagel's conjecture, both in showing that complex behavior requires something like conscious experience, and in showing that discovering it requires more than analyzing the concept of experience.
  2. Already [Turing, 1950] disposes of ``the claim that a machine cannot be the subject of its own thought''. Turing further remarks
    By observing the results of its own behavior it can modify its own programs so as to achieve some purpose more effectively. These are possibilities of the near future rather than Utopian dreams.
    We want more than Turing explicitly asked for. The machine should observe its processes in action and not just their results.
  3. The preceding sections are not to be taken as a theory of human consciousness. We do not claim that the human brain uses sentences as its primary way of representing information.

    Of course, logical AI involves using actual sentences in the memory of the machine.

  4. Daniel Dennett [Dennett, 1991] argues that human consciousness is not a single place in the brain with every conscious idea appearing there. I think he is partly right about the human brain, but I think a unitary consciousness will work quite well for robots. It would likely also work for humans, but evolution happens to have produced a brain with distributed consciousness.
  5. John H. Flavell and his colleagues [Flavell and O'Donnell, 1999], [John H. Flavell and Flavell, 2000] describe experiments concerning the introspective abilities of people ranging from 3 years old to adulthood. Even 3 year olds have some limited introspective abilities, and the ability to report on their own thoughts and to infer the thoughts of others grows with age. Flavell et al. reference other work in this area. This is apparently a newly respectable area of experimental psychology, since the earliest references are from the late 1980s.
  6. Francis Crick [Crick, 1995] discusses how to find neurological correlates of consciousness in the human and animal brain. I agree with all the philosophy in his book and wish success to him and others using neuroscience. However, after reading his book, I think the logical artificial intelligence approach has a good chance of achieving human-level intelligence sooner. It won't tell us as much about human intelligence, however.
  7. What about the unconscious? Do we need it for robots? Very likely we will need some intermediate computational processes whose results are not appropriately included in the set of sentences we take as the consciousness of the robot. However, they should be observable when this is useful, i.e. sentences giving facts about these processes and their results should appear in consciousness as a result of mental actions aimed at observing them. There is no need for a full-fledged Freudian unconscious with purposes of its own.
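
    A minimal sketch of this idea, assuming a toy robot whose consciousness is just a set of sentences; the class, attribute, and method names are invented for illustration and do not come from this article:

      class Robot:
          def __init__(self):
              self.consciousness = set()   # sentences the robot can reason about directly
              self.unconscious = {}        # intermediate results, not kept as sentences

          def plan_route(self, start, goal):
              # Internal computation: only its result becomes a conscious sentence;
              # the intermediate search data stays unconscious.
              steps = [start, "corridor", goal]   # stand-in for a real search
              self.unconscious["plan_route"] = {"expanded_nodes": 42, "steps": steps}
              self.consciousness.add(f"plan({start},{goal}) = {steps}")
              return steps

          def observe(self, process):
              # Mental action: turn facts about an unconscious process into sentences.
              for key, value in self.unconscious.get(process, {}).items():
                  self.consciousness.add(f"{process}.{key} = {value}")

      r = Robot()
      r.plan_route("lab", "library")
      r.observe("plan_route")   # facts about the planning process are now conscious
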
  8. Should a robot hope? In what sense might it hope? How close would this be to human hope? It seems that the answer is yes, and that robot hope would be quite similar to human hope. If it hopes for various things, and enough of the hopes come true, then the robot can conclude that it is doing well and that its higher level strategy is sound. If its hopes are always disappointed, then it needs to change its higher level strategy.

    To use hopes in this way, the robot's self-observation must include remembering what it hoped for.

    Sometimes a robot must also infer that other robots or people hope or did hope for certain things.

  9. The syntactic form is simple enough. If p is a proposition, then Hope(p) is the proposition that the robot hopes for p to become true. In mental situation calculus we would write

      holds(Hope(p), s)

    to assert that in mental situation s, the robot hopes for p.

    Human hopes have certain qualities that I can't decide whether we will want in robots. Hope automatically brings into consciousness thoughts related to what a situation realizing the hope would be like. We could design our programs to do the same, but this is more automatic in the human case than might be optimal. Wishful thinking is a well-known human malfunction.
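
    As a purely illustrative sketch of the bookkeeping that remembering and checking hopes requires, assuming a toy Python representation of mental situations; the names and the 0.5 threshold are assumptions made for the example, not anything specified here:

      class MentalSituation:
          def __init__(self):
              self.hopes = []        # propositions p for which holds(Hope(p), s)
              self.facts = set()     # propositions known to hold in this situation

      def hope(s, p):
          s.hopes.append(p)          # assert holds(Hope(p), s)

      def review_strategy(history):
          # Self-observation: remember what was hoped for, check what came true.
          hoped = [p for s in history for p in s.hopes]
          if not hoped:
              return "no evidence yet"
          fulfilled = sum(p in history[-1].facts for p in hoped)
          # Illustrative rule: mostly fulfilled hopes -> keep the higher level strategy.
          return "keep strategy" if fulfilled / len(hoped) > 0.5 else "revise strategy"

      s0 = MentalSituation()
      hope(s0, "delivered(package)")
      s1 = MentalSituation()
      s1.facts.add("delivered(package)")
      print(review_strategy([s0, s1]))   # -> keep strategy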

  10. A robot should be able to wish that it had acted differently from the way it did. A mental example is that the robot may have taken too long to solve a problem and might wish that it had thought of the solution immediately. This will cause it to think about how it might solve such problems in the future with less computation.
  11. A human can wish that his motivations and goals were different from what he observes them to be. It would seem that a program with such a wish could just change its goals. However, it may not be so simple if different subgoals each give rise to wishes, e.g. wishes that the other subgoals were different.
  12. Programs that represent information by sentences, but generate new sentences by processes that don't correspond to logical reasoning, present introspection problems similar to those of logical AI. Approaches to AI that don't use sentences at all need some other way of representing the results of introspection if they are to use it at all.
  13. Psychologists and philosophers from Aristotle on have appealed to association as the main tool of thought. Association alone is clearly inadequate for drawing conclusions. We can make sense of their ideas by regarding association as the main tool for bringing facts into consciousness, but requiring reasoning to reach conclusions.
  14. Some conclusions are reached by deduction, some by nonmonotonic reasoning and some by looking for models--alternatively by reasoning in second order logic.
  15. Case-based reasoning: cases are relatively rich objects--or maybe we should say locally rich.


