

Introduction

Developing self-aware computer systems will be an interesting and challenging project. It seems to me that the human forms of self-awareness play an important role in humans' achieving their goals, and they will also be important for advanced computer systems. However, I think they will be difficult to implement in present computer formalisms, even in the most advanced logical AI formalisms. The useful forms of computer agent self-awareness will not be identical with the human forms; indeed, many aspects of human self-awareness are bugs and will not be wanted in computer systems. [McCarthy 1996] includes a discussion of this and other aspects of robot consciousness.

Nevertheless, for now, human self-awareness, as observed introspectively, is the best clue. Introspection may be more useful than the literature of experimental psychology, because it gives more ideas, and the ideas can be checked for usefulness by considering programs that implement them. Moreover, at least in the beginning of the study of self-awareness, we should be ontologically promiscuous; e.g., we should not identify intentions with goals. Significant differences may become apparent, and we can always squeeze later.

Some human forms of self-awareness are conveniently and often linguistically expressed, and others are not. For example, one rarely has occasion to announce the state of tension in one's muscles, though something about it can be expressed if useful. How the sensation of blue differs from the sensation of red apparently cannot be verbally expressed; at least the qualia-oriented philosophers have put a lot of effort into saying so. What an artificial agent can usefully express in formulas need not correspond to what humans ordinarily say, or even can say. In general, computer programs can usefully be given much greater powers of self-awareness than humans have, because every component of the state of the machine or its memory can be made accessible to be read by the program.
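As an illustration of this last point (a sketch added here, not from the original text), consider a toy agent, written below in Python, whose entire state is readable by the agent itself. The class and attribute names are invented for the example.

    class Agent:
        def __init__(self):
            self.hungry = True    # a fluent people readily verbalize
            self.tension = 0.7    # a fluent people rarely verbalize

        def report_state(self):
            # Every component of the agent's state is accessible to the
            # agent itself, unlike human muscle tension or qualia.
            return dict(vars(self))

    print(Agent().report_state())   # {'hungry': True, 'tension': 0.7}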

A straightforward way of logically formalizing self-awareness is in terms of a mental situation calculus with certain observable fluents. The agent is aware of the observable mental fluents and their values. A formalism with mental situations and fluents will also have mental events including actions, and their occurrence will affect the values of the observable fluents. I advocate the form of situation calculus proposed in [McCarthy 2002].
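For concreteness, here is a minimal sketch of such axioms; the particular fluent and event names ($Observable$, $Aware$, $Eat$) and the function $Next$ are illustrative assumptions in the general style of [McCarthy 2002], not a fixed proposal. An agent is aware of an observable fluent that holds, and the occurrence of an event changes the values of the fluents:

$Holds(Observable(f),s) \land Holds(f,s) \implies Holds(Aware(f),s)$,

$Holds(Hungry,s) \land Occurs(Eat,s) \implies \neg Holds(Hungry,Next(s))$.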

Self-awareness is continuous with other forms of awareness. Awareness of being hot and awareness of the room being hot are similar. A simple fluent of which a person is aware is hunger. We can write $Hungry(s)$ about a mental situation $s$, but if we write $Holds(Hungry,s)$, then $Hungry$ can be the value of bound variables. Another advantage is that $Hungry$ is now an object, and the agent can compare $Hungry$ with $Thirsty$ or $Bored$. I'm not sure where the object $Hunger$ comes in, but I'm pretty sure our formalism should have it and not just $Hungry$. We can even use $Holds(Applies(Hunger,I),s)$ but tolerate abbreviations, especially in contexts.
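The gain from reification can be made explicit. With $Hungry$ as an object, the agent can quantify over its own fluents and compare them, e.g. using a comparison predicate $MoreUrgent$ (a hypothetical name introduced here for illustration):

$\exists f\,(Holds(f,s) \land MoreUrgent(f,Bored,s))$,

a sentence with no counterpart when hunger is written as the predicate $Hungry(s)$.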

Our goal in this research is an epistemologically adequate formalism in the sense of [McCarthy and Hayes 1969] for representing what a person or robot can actually learn about the world. In this case, the goal is to represent facts of self-awareness of a system, both as an internal language for the system and as an external language for the use of people or other systems.

Basic entities, e.g., automaton states as discussed in [McCarthy and Hayes 1969] or neural states, may be good for devising theories at present, but we cannot express what a person or robot actually knows about its situation in such terms.


John McCarthy
2004-04-11