We propose to design robot consciousness with explicitly represented beliefs as follows. At any time a certain set of sentences is directly available for reasoning; we call this set the robot's awareness. Some of these sentences, perhaps all, are available for observation, i.e. processes can generate sentences about them. These sentences constitute the robot's consciousness. In this article we take the awareness and the consciousness to coincide; this shortens the discussion.
Some sentences come into consciousness by processes that operate all the time, i.e. by involuntary subconscious processes. Others come into consciousness as a result of mental actions that the robot decides to take, e.g. observations of its own consciousness. The latter are the results of introspection and constitute self-consciousness.
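The design just described can be sketched in a few lines of code. This is only an illustrative toy, not a proposed implementation; the class `Robot` and the method names `perceive` and `introspect` are our own hypothetical choices, and, as in the text, awareness and consciousness are taken to coincide.

```python
class Robot:
    def __init__(self):
        # The set of sentences directly available for reasoning:
        # the robot's awareness (here identified with its consciousness).
        self.awareness = set()

    def perceive(self, sentence):
        # Involuntary subconscious process: sentences enter consciousness
        # continually, without any decision by the robot.
        self.awareness.add(sentence)

    def introspect(self, predicate):
        # Deliberate mental action: observe one's own consciousness and
        # generate new sentences *about* the sentences found there.
        observed = {s for s in self.awareness if predicate(s)}
        for s in observed:
            self.awareness.add(f'in-consciousness("{s}")')
        return observed

r = Robot()
r.perceive('raining(now)')
r.introspect(lambda s: 'raining' in s)
print('in-consciousness("raining(now)")' in r.awareness)  # True
```

The essential point of the design survives even in this toy: introspective output is itself a sentence in consciousness, so it is in turn available for reasoning and for further observation.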
Here's an example of human introspection. Suppose I ask you whether the President of the United States is standing, sitting or lying down at the moment, and suppose you answer that you don't know. Suppose I then ask you to think harder about it, and you answer that no amount of thinking will help. [Kraus et al., 1991] gives one formalization of this kind of answer. A certain amount of introspection is required to give this answer, and robots will need a corresponding ability if they are to decide correctly whether to think more about a question or to seek the information they require externally.
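A crude way to see how a robot might justify "no amount of thinking will help" is to saturate its knowledge base under its inference rules and observe that the query is still underivable. The sketch below is a hypothetical toy (it is not the formalization of [Kraus et al., 1991]): it uses simple forward chaining over propositional facts, with all sentence strings and the helper `saturate` invented for illustration.

```python
def saturate(facts, rules):
    # rules: list of (premises, conclusion) pairs.
    # Forward-chain to a fixpoint: everything derivable is derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

facts = {'is-president(p)'}   # the robot knows there is a President...
rules = []                    # ...but has no rules bearing on his posture
closure = saturate(facts, rules)

query = 'standing(p)'
if query in closure:
    print('known: yes')
elif 'not ' + query in closure:
    print('known: no')
else:
    # Neither the query nor its negation is derivable from the saturated
    # knowledge base: further deductive thinking cannot help, so the
    # information must be sought externally.
    print('unknown; seek information externally')
```

Because the closure is a fixpoint, the robot's conclusion is a statement about its own reasoning powers, not merely about the current contents of memory, which is exactly the introspective ability the example demands.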
We discuss what forms of consciousness and introspection are required for robots and how some of them may be formalized. It seems that the designer of robots has many choices to make about what features of human consciousness to include. Moreover, it is very likely that useful robots will include some introspective abilities not fully possessed by humans.
Two important features of consciousness and introspection are the ability to infer nonknowledge and the ability to do nonmonotonic reasoning.
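The two abilities interact: negation as failure is a nonmonotonic rule, and applying it to one's own knowledge base is precisely an inference of nonknowledge. The fragment below is a hypothetical illustration of this connection (the function names are ours); the conclusion "I do not know it" is valid only relative to the current knowledge base and is retracted when new sentences arrive, which is what makes the reasoning nonmonotonic.

```python
def knows(kb, sentence):
    # The sentence is known iff it is present in the knowledge base.
    return sentence in kb

def infer_nonknowledge(kb, sentence):
    # Nonmonotonic step (negation as failure): from failure to find the
    # sentence, conclude "I do not know it".
    return not knows(kb, sentence)

kb = {'bird(tweety)'}
print(infer_nonknowledge(kb, 'sitting(president)'))  # True: not known
kb.add('sitting(president)')                         # new information arrives
print(infer_nonknowledge(kb, 'sitting(president)'))  # False: retracted
```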