What Will Self-Aware Computer Systems Be?
John McCarthy, Stanford University
mccarthy@stanford.edu
http://www-formal.stanford.edu/jmc/
November 16, 2006
• DARPA wants to know, and there's a workshop tomorrow.
• The subject is ready for basic research.
• Short-term applications may be feasible.
• Self-awareness is mainly applicable to programs with persistent existence.
WHAT WILL SELF-AWARE SYSTEMS BE AWARE OF?
• Easy aspects of state: battery level, memory available, etc.
• Ongoing activities: serving users, driving a car.
• Knowledge and lack of knowledge.
• Purposes, intentions, hopes, fears, likes, dislikes.
• Actions it is free to choose among relative to external constraints. That's where free will comes from.
• Permanent aspects of mental state, e.g. long-term goals, beliefs.
• Episodic memory---only partial in humans, probably absent in animals, but readily available in computer systems.
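The list above can be read as a specification for a data structure. A minimal illustrative sketch in Python (all names here are mine, not the talk's):

```python
import time

class SelfModel:
    """Minimal self-model covering the aspects listed above.
    Illustrative only; field names are invented for this sketch."""

    def __init__(self):
        self.battery_level = 1.0      # easy aspect of state
        self.memory_available = 512   # easy aspect of state (MB)
        self.activities = []          # ongoing activities, e.g. "driving a car"
        self.long_term_goals = []     # permanent aspects of mental state
        self.episodes = []            # episodic memory: a complete event log

    def record(self, event):
        # A computer system can keep full episodic memory cheaply,
        # unlike humans, whose episodic memory is only partial.
        self.episodes.append((time.time(), event))

    def aware_of(self):
        # Answers "what am I doing, and what state am I in?"
        return {
            "battery_level": self.battery_level,
            "memory_available": self.memory_available,
            "activities": list(self.activities),
            "episode_count": len(self.episodes),
        }
```

For example, a driving agent would append `"driving a car"` to `activities` and log each trip event with `record`.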
HUMAN SELF-AWARENESS---1
• Human self-awareness is weak but improves with age.
• A five-year-old, but not a three-year-old, can say: ``I used to think the box contained candy because of the cover, but now I know it has crayons. He will think it contains candy.''
• Simple examples: I'm hungry, my left knee hurts from a scrape, my right knee feels normal, my right hand is making a fist.
• Intentions: I intend to have dinner, I intend to visit New Zealand some day. I do not intend to die.
• I exist in time with a past and a future. Philosophers argue a lot about what this means and how to represent it.
• Permanent aspects of one's mind: I speak English and a little French and Russian. I like hamburgers and caviar. I cannot know my blood pressure without measuring it.
HUMAN SELF-AWARENESS---2
• What are my choices? (Free will is having choices.)
• Habits: I know I often think of you. I often have breakfast at the Peninsula Creamery.
• Ongoing processes: I'm typing slides and also getting hungry.
• Juliet hoped there was enough poison in Romeo's vial to kill her.
• More: fears, wants (sometimes simultaneous but incompatible).
• Permanent compared with instantaneous wants.
MENTAL EVENTS (INCLUDING ACTIONS)
• consider
• infer
• decide
• choose to believe
• remember
• forget
• realize
• ignore
MACHINE SELF-AWARENESS
• Easy self-awareness: battery state, memory left.
• Straightforward self-awareness: the program itself, the programming language specs, the machine specs.
• Self-simulation: a program can simulate itself for any given number of steps. It can't in general answer ``Will I ever stop?'', and it can't in general answer ``Will I stop in less than n steps?'' in less than n steps.
• Its choices and their inferred consequences (free will).
• ``I hope it won't rain tomorrow''. Should a machine hope and be aware that it hopes? I think it should sometimes.
• ¬Knows(I, Telephone(Mike)), so I'll have to look it up.
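The self-simulation point can be illustrated with a step-bounded interpreter. A sketch for a toy counter machine (the instruction set is invented for illustration):

```python
def steps_to_halt(program, limit):
    """Run a toy counter machine for at most `limit` steps.
    Returns the step count at which it halts, or None if no verdict
    was reached within the limit. Illustrative, not from the talk."""
    pc, acc, steps = 0, 0, 0
    while steps < limit:
        if pc >= len(program):
            return steps              # halted: pc ran off the end
        op, arg = program[pc]
        if op == "add":
            acc += arg
            pc += 1
        elif op == "jnz":             # jump to arg if acc != 0
            pc = arg if acc != 0 else pc + 1
        steps += 1
    return None                       # may or may not ever halt

halting = [("add", 1)]                # halts after one step
looping = [("add", 1), ("jnz", 0)]    # never halts
```

Answering ``Will I stop within n steps?'' costs up to n simulated steps here, while ``Will I ever stop?'' can only ever be answered ``no verdict yet'' for a looping program, matching the undecidability point above.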
WHY WE NEED CONCEPTS AS OBJECTS
We had

   ¬Knows(I, Telephone(Mike)),

and I'll have to look it up.

Suppose Telephone(Mike) = Telephone(Mary). If we write ¬Knows(I, Telephone(Mike)), then substitution of equals would give ¬Knows(I, Telephone(Mary)), which doesn't make sense.
There are various proposals for getting around this. The most
advocated is some form of modal logic. My proposal is to regard
individual concepts as objects, and represent them by different
symbols, e.g. doubling the first letter.
There's more about why this is a good idea in my ``First order theories of individual concepts and propositions''.
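The substitution failure can be sketched in code. This is an illustrative Python encoding (the phone number and relation names are invented): attaching knowledge to values licenses unwanted substitution of equals, while attaching it to distinct concept objects does not.

```python
# Telephone maps persons to numbers; TTelephone (doubling the first
# letter, per the convention above) names the *concept* of the number.
telephone = {"Mike": "321-7580", "Mary": "321-7580"}  # same number, by assumption

# Knowledge attached to the value: since telephone["Mike"] equals
# telephone["Mary"], this wrongly implies Pat knows Mary's number too.
knows_value = {("Pat", telephone["Mike"])}
assert ("Pat", telephone["Mary"]) in knows_value      # the unwanted inference

# Knowledge attached to the concept: TTelephone(MMike) and
# TTelephone(MMary) are distinct objects even though they denote the
# same number, so no unwanted substitution occurs.
knows_concept = {("Pat", ("TTelephone", "MMike"))}
assert ("Pat", ("TTelephone", "MMary")) not in knows_concept
```

The example uses positive knowledge for simplicity; the same failure arises for the negated form on the previous slide.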
WE ALSO NEED CONTEXTS AS OBJECTS
We write

   ist(c, p)

to assert p while in the context c. Terms can also be written using contexts: value(c, term) denotes the value of the expression term in the context c.

The main application of contexts as objects is to assert relations between the objects denoted by different expressions in different contexts. Thus we have

   value(c1, exp1) = value(c2, exp2)

or, more generally,

   R(value(c1, exp1), value(c2, exp2)).

Such relations between expressions in different contexts allow using a situation calculus theory in which the actor is not explicitly represented in an outer context in which there is more than one actor.
We also need to express the relation between an external context
in which we refer to the knowledge and awareness of AutoCar1 and
AutoCar1's internal context in which it can use ``I''.
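A minimal executable encoding of ist(c, p) and value(c, term), assuming contexts are dictionaries of assertions and denotations (the context names and fact tuples are illustrative):

```python
# Each context carries its own assertions and its own denotations for terms.
contexts = {
    "Outer": {
        "facts": {("Driving", "AutoCar1", "Office", "Home")},
        "denotes": {},
    },
    "C_AutoCar1": {                    # AutoCar1's internal context
        "facts": {("Driving", "I", "Office", "Home")},
        "denotes": {"I": "AutoCar1"},  # lifting relation: inner "I" is outer AutoCar1
    },
}

def ist(c, p):
    """ist(c, p): is p asserted in context c?"""
    return p in contexts[c]["facts"]

def value(c, term):
    """value(c, term): what the expression term denotes in context c."""
    return contexts[c]["denotes"].get(term, term)

# Relating expressions across contexts, as in value(c1,exp1) = value(c2,exp2):
assert value("C_AutoCar1", "I") == value("Outer", "AutoCar1")
```

The final assertion is exactly the external/internal relation described above: the term ``I'' inside AutoCar1's context denotes the same object as ``AutoCar1'' in the outer context.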
SELF-AWARENESS EXPRESSED IN LOGICAL FORMULAS---1
Pat is aware of his intention to eat dinner at home.

Awareness(Pat) is a context. Eat(Dinner) denotes the general act of eating dinner, logically different from eating a particular dinner.

Mod(AtHome, Eat(Dinner)) is what you get when you apply the modifier ``at home'' to the act of eating dinner.

Intend(I, Mod(AtHome, Eat(Dinner))) says that I intend to eat dinner at home. The use of I is appropriate within the context of a person's (here Pat's) awareness.

We should extend this to say that Pat will eat dinner at home unless his intention changes. This can be expressed by formulas in the notation of [McC02].
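The slide above builds Pat's intention out of nested terms. A sketch with terms as tuples; all constructor names (Eat, Mod, AtHome, Intend, Awareness) are as reconstructed above and should be read as assumptions:

```python
# Terms as nested tuples.
Eat = ("Eat", "Dinner")                 # the general act of eating dinner
AtHomeEat = ("Mod", "AtHome", Eat)      # "at home" applied as a modifier
intention = ("Intend", "I", AtHomeEat)  # "I intend to eat dinner at home"

# The context Awareness(Pat), modeled as the set of formulas
# asserted within Pat's awareness. "I" is appropriate here because
# the whole set is interpreted from Pat's point of view.
awareness_of_Pat = {intention}

# The modified act is a different object from the unmodified one,
# so intending Mod(AtHome, Eat(Dinner)) differs from intending Eat(Dinner).
assert AtHomeEat != Eat
```

The point of the tuple encoding is that modifiers compose: further modifiers (say, a time) would wrap the term again without changing the inner act.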
FORMULAS---2
• AutoCar1 is driving John from Office to Home. AutoCar1 is aware of this. AutoCar1 becomes aware that it is low on hydrogen. AutoCar1 is permanently aware that it must ask permission to stop for gas, so it asks for permission. Etc. These facts are expressed in a suitable context.
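The AutoCar1 scenario can be sketched as facts in a context, with becoming aware modeled as adding a fact plus the awareness of it. The context name C and all fact tuples are illustrative:

```python
# Facts asserted in AutoCar1's context, including permanent awareness
# of the rule that it must ask permission to stop.
C = {
    ("Driving", "I", "Office", "Home"),
    ("AwareOf", "I", ("Driving", "I", "Office", "Home")),
    ("MustAskPermissionToStop", "I"),          # permanent awareness
}

def observe(context, fact):
    """Becoming aware: assert the fact and the awareness of it,
    then react to it using the permanently known rules."""
    context.add(fact)
    context.add(("AwareOf", "I", fact))
    if fact == ("LowOn", "I", "Hydrogen") and \
       ("MustAskPermissionToStop", "I") in context:
        context.add(("Asks", "I", "PermissionToStopForGas"))

observe(C, ("LowOn", "I", "Hydrogen"))
```

After the call, C contains both the awareness of the low-hydrogen condition and the resulting request for permission, mirroring the narrative on this slide.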
QUESTIONS
• Does the lunar explorer require self-awareness? What about the entries in the recent DARPA contest?
• Do self-aware reasoning systems require dealing with referential opacity? What about explicit contexts?
• Where do tracing and journaling involve self-awareness?
• Does an online tutoring program (for example, a program that teaches a student chemistry) need to be self-aware?
• What is the simplest self-aware system?
• Does self-awareness always involve self-monitoring?
• In what ways does self-awareness differ from awareness of other agents? Does it require special forms of representation or architecture?