
Consciousness of Self

 

[McC95] discusses the kinds of consciousness of its own mental processes that a robot will require in order to behave intelligently. Here are a few of them.

  1. Keeping a journal of physical and intellectual events so it can refer to its past beliefs, observations, and actions.
  2. Observing its goal structure and forming sentences about it. Notice that merely having a stack of subgoals doesn't achieve this unless the stack is observable and not merely obeyable.
  3. The robot may intend to perform a certain action. It may later infer that certain possibilities are irrelevant in view of its intentions. This requires the ability to observe intentions.
  4. Observing how it arrived at its current beliefs. Most of the important beliefs of the system will have been obtained by nonmonotonic reasoning, and therefore are usually uncertain. It will need to maintain a critical view of these beliefs, i.e. believe meta-sentences about them that will aid in revising them when new information warrants doing so. It will presumably be useful to maintain a pedigree for each belief of the system so that it can be revised if its logical ancestors are revised. Reason maintenance systems maintain the pedigrees but not in the form of sentences that can be used in reasoning. Neither do they have introspective subroutines that can observe the pedigrees and generate sentences about them.
  5. Not only pedigrees of beliefs but other auxiliary information should either be represented as sentences or be observable in such a way as to give rise to sentences. Thus a system should be able to answer the questions: "Why do I believe p?" or alternatively "Why don't I believe p?".
  6. Regarding its entire mental state up to the present as an object, i.e. a context. [McC93] discusses contexts as formal objects. The ability to transcend one's present context and think about it as an object is an important form of introspection, especially when we compare human and machine intelligence as Roger Penrose (1994) and other philosophical AI critics do.
  7. Knowing what goals it can currently achieve and what its choices are for action. We claim that the ability to understand one's own choices constitutes free will. The subject is discussed in detail in [MH69].
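Several of these requirements describe concrete data structures: a journal of events (1), a goal stack that is observable rather than merely obeyable (2), and beliefs carrying pedigrees that support revision and introspective "Why do I believe p?" queries (4 and 5). The following is a minimal sketch of how they might fit together; all class and method names (Mind, believe, why, revise) are hypothetical illustrations, not anything proposed in [McC95].

```python
# Hypothetical sketch: a belief store with pedigrees, an event journal,
# and an observable goal stack, loosely following requirements 1, 2, 4, 5.

from dataclasses import dataclass, field

@dataclass
class Belief:
    sentence: str
    reason: str                                     # how the belief was obtained
    ancestors: list = field(default_factory=list)   # pedigree: supporting beliefs

class Mind:
    def __init__(self):
        self.journal = []    # requirement 1: record of physical/intellectual events
        self.goals = []      # requirement 2: a stack the system can observe, not only obey
        self.beliefs = {}    # requirements 4-5: beliefs kept with their pedigrees

    def record(self, event):
        self.journal.append(event)

    def push_goal(self, goal):
        self.goals.append(goal)
        self.record(f"adopted goal: {goal}")

    def believe(self, sentence, reason, ancestors=()):
        self.beliefs[sentence] = Belief(sentence, reason, list(ancestors))
        self.record(f"came to believe: {sentence}")

    def why(self, sentence):
        """Requirement 5: answer 'Why do I believe p?' with a sentence."""
        b = self.beliefs.get(sentence)
        if b is None:
            return f"I do not believe '{sentence}'."
        support = f", relying on {b.ancestors}" if b.ancestors else ""
        return f"I believe '{sentence}' because {b.reason}{support}"

    def revise(self, sentence):
        """Requirement 4: retract a belief and the beliefs it directly supports."""
        for s, b in list(self.beliefs.items()):
            if s == sentence or sentence in b.ancestors:
                del self.beliefs[s]
                self.record(f"retracted: {s}")

m = Mind()
m.believe("the door is open", "I observed it")
m.believe("I can leave the room", "nonmonotonic inference",
          ancestors=["the door is open"])
print(m.why("I can leave the room"))
m.revise("the door is open")    # the logical descendant is retracted too
print(m.why("I can leave the room"))
```

Unlike a reason maintenance system, which keeps pedigrees only as internal bookkeeping, the sketch's why method turns the pedigree into a sentence the system could itself reason about, which is the point of requirements 4 and 5.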

Taken together, these requirements for successful human-level goal-achieving behavior amount to a substantial fraction of human consciousness. A human emotional structure is not required for robots.



John McCarthy
Fri Feb 28 07:25:22 PDT 1997