Computer Science Department
July 24, 1995 to July 15, 2002
Conscious knowledge and other conscious information are distinguished from unconscious information by being observable, and observing them results in conscious knowledge about them. We call this introspective knowledge.
A robot will need to use introspective knowledge in order to operate in the common sense world and accomplish the tasks humans will give it.
Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited domains.
We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind.
A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally.
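The introspective step described here can be sketched in code. The following is a minimal illustration, not the paper's formalism: the `Agent` class, its belief sets, and the `decide` method are assumptions introduced purely for exposition. The point is that the agent can conclude it knows neither a proposition nor its negation, and so must observe the world.

```python
class Agent:
    """Illustrative agent with introspective access to its own knowledge."""

    def __init__(self):
        self.beliefs = set()     # propositions the agent knows to be true
        self.disbeliefs = set()  # propositions the agent knows to be false

    def knows(self, p):
        return p in self.beliefs

    def knows_not(self, p):
        return p in self.disbeliefs

    def decide(self, p):
        """Answer p from memory if possible; otherwise report ignorance.

        Returns True or False when p is decidable from memory, and None
        as the introspective conclusion "I do not know whether p", which
        signals that information must be sought externally.
        """
        if self.knows(p):
            return True
        if self.knows_not(p):
            return False
        return None


agent = Agent()
agent.beliefs.add("the door is open")
print(agent.decide("the door is open"))  # decidable from memory: True
print(agent.decide("it is raining"))     # undecidable: None, so look outside
```

The essential feature is that non-knowledge is itself represented and usable in reasoning, rather than being a mere absence.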
Programs with much introspective consciousness do not yet exist.
Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focuses on the aspects of consciousness important for intelligent behavior. If the advocates of qualia are right, it looks like robots won't need them to exhibit any behavior exhibited by humans.