What Consciousness does a Robot Need?
In some respects it is easy to provide computer programs with more
powerful introspective abilities than humans have. A computer program
can inspect itself, and many programs do this in a rather trivial way.
Namely, they compute checksums in order to verify that they have been
read into computer memory without modification.
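For instance, a program might verify its own text against a digest recorded
at installation time. The following sketch (in Python; the file name and the
stored digest are invented for the example) illustrates the idea:

    import hashlib

    # Hypothetical digest recorded when the program was installed.
    EXPECTED_DIGEST = "0" * 64

    def unmodified(path="robot_program.py"):
        # Read the program's own text and compare its hash with the
        # recorded value; a mismatch means the program was altered.
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        return actual == EXPECTED_DIGEST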
It is easy to make available for inspection by the program the manuals
for the programming language used, the manual for the computer itself
and a copy of the compiler. A computer program can use this
information to simulate what it would do if provided with given
inputs. It can answer a question like: ``Would I print `YES' in
less than 1,000,000 steps for a certain input?'' A finite version of
Turing's argument that the halting problem is unsolvable tells
us that a computer cannot in general answer questions about what
it would do in n steps in less than n steps. If it could, we (or
a computer program) could construct a program that would answer a
question about what it would do in n steps and then do the opposite.
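The construction can be made concrete. Suppose, contrary to the claim, that
such a fast predictor existed. The sketch below (all names invented; the
predictor is assumed rather than implemented, since none can exist) shows how
to derive the contradiction:

    N = 1_000_000

    def would_print_yes(program, inp, n):
        # Assumed oracle: decides, in fewer than n steps, whether
        # program(inp) prints "YES" within n steps.  The function
        # spite below shows that no such oracle can exist.
        raise NotImplementedError

    def spite(inp):
        # Ask the oracle what spite itself would do on this input in
        # N steps, then do the opposite -- contradicting the oracle's
        # answer either way.
        if would_print_yes(spite, inp, N):
            print("NO")
        else:
            print("YES")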
Unfortunately, these easy forms of introspection are not especially
useful for intelligent behavior in many common sense informatic
situations.
We humans have rather weak memories of the events in our lives,
especially of intellectual events. A computer program, by contrast,
can remember its entire intellectual history and can use that record
to modify its beliefs on the basis of new inferences or observations.
This may prove very powerful.
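A minimal sketch of such a history, under the assumption of a toy
representation in which each intellectual event records the sentence
concluded and its supports:

    journal = []   # chronological record of intellectual events

    def record(kind, sentence, supports=()):
        # Append an intellectual event: what was observed or concluded,
        # and from which earlier sentences it was obtained.
        journal.append({"kind": kind,
                        "sentence": sentence,
                        "supports": list(supports)})

    record("observation", "door(Closed)")
    record("inference", "blocked(Door1)", supports=["door(Closed)"])

    # The program can later re-examine how a belief arose and revise it
    # if an observation contradicts one of its supports.
    origin = [e for e in journal if e["sentence"] == "blocked(Door1)"]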
To do the tasks we give it, a robot will need at least the
following forms of self-consciousness, i.e. the ability to observe its
own mental state. When we say that something is
observable, we mean that a suitable action by the robot
causes a sentence and possibly other data structures giving the result
of the observation to appear in the robot's consciousness.
We will give tentative formulas for some of the results of
observations. In this we take advantage of the ideas of [McCarthy, 1993]
and give a context for each formula. This makes the formulas shorter.
What the symbols occurring in these formulas mean is determined in an
outer context.
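As an illustration of this notion of observability, here is a toy sketch in
which an observation action causes a sentence, tagged with a context in the
style of ist(c, p) from [McCarthy, 1993], to appear among the sentences the
robot can reason with; all the names are invented for the example:

    consciousness = []   # sentences currently available to the reasoner

    def observe_battery(robot):
        # A suitable action: reading the power-supply variable causes a
        # sentence reporting its value to appear in consciousness.
        level = robot["battery_level"]
        sentence = ("ist", "C-now", ("battery-level", level))
        consciousness.append(sentence)
        return sentence

    robot = {"battery_level": 0.42}
    observe_battery(robot)
    # consciousness now holds ist(C-now, battery-level(0.42))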
- Observing its physical body, recognizing the positions of its
effectors, noticing the relation of its body to the environment and
noticing the values of important internal variables, e.g. the state
of its power supply and of its communication channels.

[No reason why the robot shouldn't have three hands.]
- Observing that it does or doesn't know the value of a certain
term, e.g. observing whether it knows the telephone number of a
certain person. Observing that it does know the number or that it
can get it by some procedure is likely to be straightforward.


Deciding that it doesn't know and cannot infer the value of
a telephone number is what should motivate the robot to look in
the phone book or ask someone. (A sketch of such a test appears
after this list.)
- Keeping a journal of physical and intellectual events
so it can refer to its past beliefs, observations and actions.
- Observing its goal structure and forming sentences about it.
Notice that merely having a stack of subgoals doesn't achieve this
unless the stack is observable and not merely obeyable.
- The robot may intend to perform a certain action. It
may later infer that certain possibilities are irrelevant in
view of its intentions. This requires the ability to observe
intentions.
- Observing how it arrived at its current beliefs.
Most of the important beliefs of the system will have been
obtained by nonmonotonic reasoning, and therefore are usually
uncertain. It will need to maintain a critical view of these
beliefs, i.e. believe meta-sentences about them that will aid
in revising them when new information warrants doing so. It will
presumably be useful to maintain a pedigree for each belief of
the system so that it can be revised if its logical ancestors
are revised. Reason maintenance systems maintain
the pedigrees but not in the form of sentences that can
be used in reasoning. Neither do they have
introspective subroutines that can observe the pedigrees
and generate sentences about them.
- Not only pedigrees of beliefs but other auxiliary information
should either be represented as sentences or be observable in such
a way as to give rise to sentences. Thus a system should be able to
answer the questions: ``Why do I believe p?'' or alternatively
``Why don't I believe p?''. (The second sketch after this list
illustrates a pedigree mechanism that supports such questions.)
- Regarding its entire mental state up to the present as an
object, i.e. a context. [McCarthy, 1993] discusses contexts as
formal objects. The ability to transcend one's present
context and think about it as an object is an important form of
introspection, especially when we compare human and machine
intelligence as Roger Penrose [Penrose, 1994] and other philosophical AI critics do.
- Knowing what goals it can currently achieve and what its choices
are for action. We claim that the ability to understand one's own
choices constitutes free will. The subject is discussed in
detail in [McCarthy and Hayes, 1969].
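The sketch promised in the second item of the list: a toy test of whether the
robot knows the value of a term, with the failure case producing a goal to
find it out (the phone book and the goal format are invented for the example):

    phone_book = {"Mike": "321-7580"}   # invented knowledge base

    def knows_value(person):
        # Observe whether the value of the term phone-number(person)
        # is currently known.
        return person in phone_book

    def phone_number(person):
        if knows_value(person):
            return phone_book[person]
        # Deciding that it doesn't know is what should motivate the
        # robot to look in the directory or ask someone.
        return ("goal", ("find-phone-number", person))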
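The sketch of belief pedigrees promised above: each belief records how it was
obtained, the pedigree is observable as a sentence answering ``Why do I
believe p?'', and retracting an ancestor withdraws its nonmonotonic
descendants. This is a toy representation; real reason maintenance systems
are more elaborate:

    beliefs = {}   # sentence -> pedigree: the rule used and the ancestors

    def believe(sentence, rule, ancestors=()):
        beliefs[sentence] = {"rule": rule, "ancestors": list(ancestors)}

    def why(sentence):
        # Observe a pedigree and generate a sentence about it that can
        # itself be used in reasoning.
        if sentence not in beliefs:
            return ("not-believed", sentence)
        p = beliefs[sentence]
        return ("believed-because", sentence, p["rule"], tuple(p["ancestors"]))

    def retract(sentence):
        # Revise a belief and, recursively, every belief that has it as
        # a logical ancestor.
        for s, p in list(beliefs.items()):
            if sentence in p["ancestors"]:
                retract(s)
        beliefs.pop(sentence, None)

    believe("bird(Tweety)", "observation")
    believe("flies(Tweety)", "default: birds normally fly",
            ancestors=["bird(Tweety)"])
    why("flies(Tweety)")      # believed-because(...)
    retract("bird(Tweety)")   # flies(Tweety) is withdrawn with it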
The above are only some of the needed forms of self-consciousness.
Research is needed to determine their properties and to
find additional useful forms of self-consciousness.