Next: Mental actions Up: NOTES ON SELF-AWARENESS Previous: Introduction

Of what are we aware, and of what should computers be aware?

Humans are aware of many different aspects of their minds. Here are samples of kinds of self-awareness--alas, not a classification.

  1. Permanent aspects of self and their relations to each other and aspects of other persons.

    Thus I am human like other humans. [I am a small child, and I am "supposed to do" what the others are doing. This is innate or learned very early on the basis of an innate predisposition to learn it.]

    What might we want an artificial agent to know about the fact that it is one among many agents? It seems to me that the forms in which self-awareness develops in babies and children are likely to be particularly suggestive for what we will want to build into computers.

  2. I exist in time. This is distinct from having facts about particular times, but what use can we make of the agent's knowing this fact--and how is the fact even to be represented?

  3. I don't know Spanish but can speak Russian and French a little. Similarly I have other skills.

    It helps to organize as much as possible of a system's knowledge as knowledge of permanent entities.

  4. I often think of you. I often have breakfast at Caffe Verona.

  5. Ongoing processes

    I am driving to the supermarket. One is aware of the past of the process and also of its future. Awareness of its present depends on some concept of the "extended now".

    Temporary phenomena:

  6. Wants, intentions and goals:

    Wants can apply to both states and actions. I want to be healthy, wealthy and wise. I want to marry Yumyum and plan to persuade her guardian Koko to let me.

  7. I intend to walk home from my office, but if someone offers me a ride, I'll take it. I intend to give X a ride home, but if X doesn't want it, I won't.
  8. If I intend to drive via Pensacola, Florida, I'll think about visiting Pat Hayes.

    I suppose you can still haggle, and regard intentions as goals, but if you do you are likely to end up distinguishing a particular kind of goal corresponding to what the unsophisticated call an intention.

  9. Attitudes

    Attitudes towards the future:
    hopes, fears, goals, expectations, anti-expectations, intentions. Actions: predict, want to know, promises and commitments.

    Attitudes toward the past:
    regrets, satisfactions, counterfactuals
    I'm aware that I regret having offended him. I believe that if I hadn't done so, he would have supported my position in this matter. It looks like a belief is a kind of weak awareness.

    Attitudes to the present:
    satisfaction. I see a dog. I don't see the dog. I wonder where the dog has gone.

    There are also attitudes toward timeless entities, e.g. towards kinds of people and things. I like strawberry ice cream but not chocolate chip.

  10. Hopes: A person can observe his hopes. I hope it won't rain tomorrow. Yesterday I hoped it wouldn't rain today. I think it will be advantageous to equip robots with mental qualities we can appropriately call hopes.

  11. Fears: I fear it will rain tomorrow. Is a fear just the opposite of a hope? Certainly not in humans, because the hormonal physiology is different, but maybe we could design it that way in robots. Maybe, but I wouldn't jump to the conclusion that we should.

    Why are hopes and fears definite mental objects? The human brain is always changing but certain structures can persist. Specific hopes and fears can last for years and can be observed. It is likely to be worthwhile to build such structures into robot minds, because they last much longer than specific neural states.

  12. An agent may observe that it has incompatible wants.
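The mental objects discussed in items 6-12 suggest concrete data structures: wants that apply to states or actions, hopes and fears that persist and can be observed, and a check for incompatible wants. A minimal sketch in Python follows; all class names, fields, and the explicit incompatibility relation are my illustrative assumptions, not a design from these notes.

```python
# Sketch: persistent, observable mental objects for an artificial agent.
# The Want/Hope/Fear classes and the declared-incompatibility relation
# are assumptions for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Want:
    """A want can apply to a state or to an action (item 6)."""
    content: str   # e.g. "be healthy" (state) or "marry Yumyum" (action)
    kind: str      # "state" or "action"


@dataclass(frozen=True)
class Hope:
    """Hopes are definite mental objects that can last and be observed (item 10)."""
    content: str   # e.g. "it won't rain tomorrow"


@dataclass(frozen=True)
class Fear:
    """Fears are represented separately from hopes, not as mere opposites (item 11)."""
    content: str


class Agent:
    """Holds persistent mental objects the agent itself can inspect."""

    def __init__(self):
        self.wants: list[Want] = []
        self.hopes: list[Hope] = []
        self.fears: list[Fear] = []
        # Pairs of want-contents declared incompatible (assumed given).
        self.incompatible: set[frozenset[str]] = set()

    def observe_incompatible_wants(self) -> list[tuple[Want, Want]]:
        """Item 12: the agent observes that it has incompatible wants."""
        clashes = []
        for i, w1 in enumerate(self.wants):
            for w2 in self.wants[i + 1:]:
                if frozenset({w1.content, w2.content}) in self.incompatible:
                    clashes.append((w1, w2))
        return clashes


a = Agent()
a.wants.append(Want("be wealthy", "state"))
a.wants.append(Want("give away all my money", "action"))
a.incompatible.add(frozenset({"be wealthy", "give away all my money"}))
print(a.observe_incompatible_wants())  # the agent observes one clash
```

The point of the sketch is only that such objects outlast any particular computational state, so the agent can observe them later, much as a specific hope or fear can last for years.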

John McCarthy