Introduction




In this article we discuss consciousness with the methodology of logical AI. [McCarthy, 1989] contains a recent discussion of logical AI. The Remarks section says a little about how our ideas about consciousness might apply to other AI methodologies. However, it seems that systems that don't represent information by sentences will be limited in the amount of self-consciousness they can have.

[McCarthy, 1959] proposed programs with common sense that represent what they know about particular situations and the world in general primarily by sentences in some language of mathematical logic. They decide what to do primarily by logical reasoning, i.e. when a logical AI program does an important action, it is usually because it inferred a sentence saying it should. There may be other data structures and programs, but the main decisions about what to do are made by logical reasoning from sentences explicitly present in the robot's memory. Some of the sentences may get into memory by processes that run independently of the robot's decisions, e.g. facts obtained by vision. Developments in logical AI include situation calculus in various forms, logical learning, nonmonotonic reasoning in various forms, theories of concepts as objects [McCarthy, 1979b] and theories of contexts as objects [McCarthy, 1993]. [McCarthy, 1959] mentioned self-observation but wasn't specific.
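
To make this style of program concrete, here is a minimal sketch. It is not the formalism of [McCarthy, 1959]; the representation of sentences as tuples and the names infer, decide and "should" are illustrative assumptions. The point is only that the program acts because a sentence saying it should act is present in, or inferred from, its memory.

    # Minimal sketch of a logical-AI decision loop: the robot's knowledge is a
    # set of sentences, and it acts only when a sentence of the form
    # ("should", action) is present or inferred.  All names are illustrative.

    def infer(memory):
        """Very crude forward chaining over ("if", premise, conclusion) rules."""
        new = set()
        for s in memory:
            if isinstance(s, tuple) and s[0] == "if" and s[1] in memory:
                new.add(s[2])
        return new

    def decide(memory):
        """Add inferred sentences to memory and return an action it should do, if any."""
        memory |= infer(memory)
        for s in memory:
            if isinstance(s, tuple) and s[0] == "should":
                return s[1]          # the action the robot inferred it should do
        return None

    # Example: a fact obtained by vision plus a general rule about the world.
    memory = {("sees", "obstacle"),
              ("if", ("sees", "obstacle"), ("should", "stop"))}
    print(decide(memory))            # -> "stop"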

There have been many programs that decide what to do by logical reasoning with logical sentences. However, I don't know of any that are conscious of their own mental processes, i.e. that bring sentences about the sentences generated by these processes into memory. We hope to establish in this article that some consciousness of their own mental processes will be required for robots to reach the level of intelligence needed to do many of the tasks humans will want to give them. In our view, consciousness of self, i.e. introspection, is essential for human-level intelligence and not a mere epiphenomenon. However, we need to distinguish which aspects of human consciousness should be modelled, which human qualities should not be, and where AI systems can go beyond human consciousness.

For the purposes of this article a robot is a continuously acting computer program interacting with the outside world and not normally stopping. What physical senses and effectors or communication channels it has are irrelevant to this discussion except as examples.

In logical AI, robot consciousness may be designed as follows. At any time a certain set of sentences are directly available for reasoning. We say these sentences are in the robot's consciousness. Some sentences come into consciousness by processes that operate all the time, i.e. by involuntary subconscious processes. Others come into consciousness as a result of mental actions, e.g. observations of its consciousness, that the robot decides to take. The latter are the results of introspection.
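
A crude sketch of this design follows; the names Robot, observe and introspect are assumptions made for illustration, not a specification. Consciousness is simply the set of sentences directly available for reasoning, with some sentences arriving through always-running processes and others only as the result of deliberate mental actions.

    # Sketch of the design above: `consciousness` is the set of sentences
    # directly available for reasoning.  `observe` stands for an involuntary,
    # always-running process; `introspect` is a mental action the robot
    # decides to take, producing sentences about its own sentences.

    class Robot:
        def __init__(self):
            self.consciousness = set()      # sentences available for reasoning

        def observe(self, percepts):
            """Involuntary process, e.g. facts obtained by vision."""
            self.consciousness |= set(percepts)

        def introspect(self, sentence):
            """Deliberate mental action: observe one's own consciousness and
            record the result as a new sentence about a sentence."""
            fact = ("in-consciousness", sentence, sentence in self.consciousness)
            self.consciousness.add(fact)
            return fact

    r = Robot()
    r.observe({("sees", "door-open")})
    print(r.introspect(("sees", "door-open")))   # ('in-consciousness', ..., True)
    print(r.introspect(("sees", "cat")))         # ('in-consciousness', ..., False)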

Here's an example of human introspection. Suppose I ask you whether the President of the United States is standing, sitting or lying down at the moment, and suppose you answer that you don't know. Suppose I then ask you to think harder about it, and you answer that no amount of thinking will help. [See [Kraus, Perlis and Horty, 1991] for one formalization.] A certain amount of introspection is required to give this answer, and robots will need a corresponding ability if they are to decide correctly whether to think more about a question or to seek the information they require externally.
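
The introspective step in this example can be sketched as follows. This is an illustration only, with the function and predicate names assumed; the cited formalization of [Kraus, Perlis and Horty, 1991] works within a logic rather than by membership tests on a set of sentences.

    # The robot looks at its own sentences, finds neither p nor ("not", p),
    # and records a sentence asserting its nonknowledge of p.

    def note_nonknowledge(consciousness, p):
        if p not in consciousness and ("not", p) not in consciousness:
            consciousness.add(("not", ("knows", "I", p)))

    consciousness = {("sees", "desk")}
    p = ("standing", "president")
    note_nonknowledge(consciousness, p)
    print(("not", ("knows", "I", p)) in consciousness)   # -> True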

We discuss what forms of consciousness and introspection are required and how some of them may be formalized. It seems that the designer of robots has many choices to make about what features of human consciousness to include. Moreover, it is very likely that useful robots will include some introspective abilities not fully possessed by humans.

Two important features of consciousness and introspection are the ability to infer nonknowledge and the ability to do nonmonotonic reasoning.

Human-like emotional structures are possible but unnecessary for useful intelligent behavior. We will also argue that it is best not to include any that would cause people to feel sorry for or to dislike robots.





