Mental events change the situation just as do physical events.
Here is a list of some mental events, mostly described informally.
$learns(p)$. The robot learns the fact $p$. An obvious consequence is
\[
holds(p, result(learns(p), s)),
\]
provided the effects are definite enough to justify the $result$ formalism. More likely we'll want something like
\[
occurs(learns(p), s) \supset holds(F\,p, s),
\]
where $occurs(e, s)$ is a point fluent asserting that the event $e$ occurs (instantaneously) in situation $s$. $F\,p$ is the proposition that the proposition $p$ will be true at some time in the future. The temporal function $F$ is used in conjunction with the function $next$ and the axiom
\[
holds(F\,p, s) \supset holds(p, next(p, s)). \tag{10}
\]
Here $next(p, s)$ denotes the next situation following $s$ in which $p$ holds. (10) asserts that if $F\,p$ holds in $s$, then there is a next situation in which $p$ holds. (This $next$ is not the $next$ of some temporal logic formalism.)
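As a concrete illustration only, the following small Python sketch encodes the machinery just described; the representation (situations as fact-and-event sets, a finite list standing in for the future of $s_0$) and the names `Situation`, `holds`, `occurs`, `F`, `next_sit` are assumptions of the example, not part of the formalism.

```python
# Illustrative sketch only: situations as a finite linear sequence, with
# propositional fluents and point events stored as sets.

from dataclasses import dataclass, field

@dataclass
class Situation:
    facts: set = field(default_factory=set)    # propositions true in this situation
    events: set = field(default_factory=set)   # events occurring (instantaneously) here

def holds(p, s):
    """The propositional fluent p holds in situation s."""
    return p in s.facts

def occurs(e, s):
    """Point fluent: the event e occurs in situation s."""
    return e in s.events

def F(p, future):
    """F p: the proposition p will be true in some future situation."""
    return any(holds(p, s2) for s2 in future)

def next_sit(p, future):
    """next(p, s): the next situation after s in which p holds (None if there is none)."""
    for s2 in future:
        if holds(p, s2):
            return s2
    return None

# Example: learns(p) occurs in s0, and p indeed holds later on.
s0 = Situation(events={("learns", "p")})
future = [Situation(), Situation(facts={"p"})]

assert occurs(("learns", "p"), s0)
assert F("p", future)                     # holds(F p, s0)
assert holds("p", next_sit("p", future))  # an instance of axiom (10)
```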
Learning the fact $p$ has an effect on the rest of the robot's knowledge. We are not yet ready to propose one of the many belief revision systems for this. Indeed we don't assume logical closure.
$forgets(p)$? Forgetting $p$ is definitely not an event with a definite result. What we can say is
\[
occurs(forgets(p), s) \supset holds(F\,\neg knows(p), s).
\]
In general, we shall want to treat forgetting as a side-effect of some more complex event. Suppose $e$ is the more complex event; we'll then have the occurrence of $forgets(p)$ among the effects of $e$.
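A brief sketch of this indefiniteness, under assumed representations (fact-sets for situations, invented function names): a definite event such as $learns(p)$ determines a single successor, while $forgets(p)$ only constrains which successors are admissible.

```python
# Sketch with assumed names: definite events yield one successor fact-set;
# forgets(p) merely rules out successors in which p is still known.

def result_learns(p, facts):
    """Definite result: the unique successor in which p is known."""
    return [facts | {("knows", p)}]

def admissible_after_forgets(p, candidates):
    """Indefinite result: any candidate successor in which p is no longer known."""
    return [c for c in candidates if ("knows", p) not in c]

facts = {("knows", "p"), ("knows", "q")}
candidates = [{("knows", "q")}, set(), {("knows", "p"), ("knows", "q")}]

print(result_learns("r", facts))                  # exactly one successor
print(admissible_after_forgets("p", candidates))  # two of the three remain
```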
The distinction is that the occurrence itself is an event, and we often don't need to reason about how long it takes, whereas what it brings about is a fluent that persists until something changes it. Some call these point fluents and continuous fluents respectively.
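The distinction can be illustrated by a toy inertia rule (purely an illustration; the effect table and the event names "decides" and "abandons" are invented for the example): an event is attached to a single step, while the fluent it establishes is carried forward unchanged until some later event removes it.

```python
# Sketch: point events vs. persisting fluents under a simple inertia rule.

def step(facts, event, effects):
    """Facts of the next situation: fluents persist (inertia) except where
    the occurring event adds or removes them."""
    add, remove = effects.get(event, (set(), set()))
    return (facts - remove) | add

effects = {
    ("decides", "a"): ({("intends", "a")}, set()),   # event establishes a fluent
    ("abandons", "a"): (set(), {("intends", "a")}),  # event removes it again
}

facts = set()
facts = step(facts, ("decides", "a"), effects)   # the event is instantaneous,
facts = step(facts, None, effects)               # but the fluent persists
assert ("intends", "a") in facts
facts = step(facts, ("abandons", "a"), effects)  # until something changes it
assert ("intends", "a") not in facts
```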
$assumes(p)$. The robot assumes $p$, e.g. for the sake of argument. The effect of this action is not exactly to believe $p$, but maybe it involves entering a context (see (McCarthy 1993)) in which $p$ holds.
$infers(p)$. The robot infers $p$ from other sentences, either by deduction or by some nonmonotonic form of inference.
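Since logical closure is not assumed, inferring is naturally an event that adds one explicitly derived sentence to the robot's beliefs. The following sketch (the rule format and names are invented for the example) performs a single such step rather than closing the belief set under consequence.

```python
# Sketch: infers(p) as a single event that adds one derived sentence to the
# belief set, rather than closing the set under all consequences.

def infer_step(beliefs, rules):
    """Apply one applicable rule (premises -> conclusion) and return the
    enlarged belief set."""
    for premises, conclusion in rules:
        if premises <= beliefs and conclusion not in beliefs:
            return beliefs | {conclusion}
    return beliefs

beliefs = {"p", "p -> q"}
rules = [({"p", "p -> q"}, "q")]      # one modus-ponens instance
beliefs = infer_step(beliefs, rules)  # the event infers(q)
print(beliefs)                        # {'p', 'p -> q', 'q'}
```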
$sees(o)$. The robot sees the object $o$. Formalizing the effects of seeing an object requires a theory of seeing that is beyond the scope of this article.
It should be obvious to the reader that we are far from having a comprehensive list of the effects of mental events. However, I hope it is also apparent that the effects of a great variety of mental events on the mental part of a situation can be formalized. Moreover, it should be clear that useful robots will need to observe mental events and reason with facts about their effects.
Most work in logical AI has involved theories in which it can be shown that a sequence of actions will achieve a goal. There are recent extensions to concurrent action, continuous action and strategies of action. All this work applies to mental actions as well.
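As a toy illustration of how ordinary plan search carries over unchanged to mental actions (the action names and the simple add-only effect format are invented for the example), a breadth-first search can find a sequence that mixes an observation, an inference, and physical actions to achieve a goal.

```python
# Sketch: breadth-first search over action sequences, where a mental action
# (here an invented "infer-need-key" step) is treated exactly like a physical one.

from collections import deque

def plan(initial, goal, actions):
    """Return a sequence of action names reaching a state satisfying goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, add) in actions.items():
            if pre <= state:
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

actions = {
    "observe-door":   (set(),                     {"knows(door-locked)"}),
    "infer-need-key": ({"knows(door-locked)"},    {"knows(need-key)"}),   # mental
    "fetch-key":      ({"knows(need-key)"},       {"has-key"}),
    "open-door":      ({"has-key"},               {"door-open"}),
}
print(plan(set(), {"door-open"}, actions))
# ['observe-door', 'infer-need-key', 'fetch-key', 'open-door']
```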
Mostly outside this work is reasoning leading to the conclusion that a goal cannot be achieved. Similar reasoning is involved in showing that actions are safe in the sense that a certain catastrophe cannot occur. Deriving both kinds of conclusion involves inductively inferring quantified propositions, e.g. ``whatever I do the goal won't be achieved'' or ``whatever happens the catastrophe will be avoided.'' This is hard for today's automated reasoning techniques, but Reiter (199x) has made important progress.
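Over a finite abstraction of the state space, the quantified conclusion "whatever I do the goal won't be achieved" reduces to checking that no reachable state satisfies the goal; the sketch below (with invented actions) does this by exhaustive search, which is of course far weaker than the inductive reasoning the general problem requires.

```python
# Sketch (invented actions): over a finite abstraction, goal unachievability
# is the statement that no reachable fact-set satisfies the goal.

def reachable(initial, actions):
    """All fact-sets reachable from initial by any sequence of actions."""
    seen, frontier = {frozenset(initial)}, [frozenset(initial)]
    while frontier:
        state = frontier.pop()
        for pre, add in actions:
            if pre <= state:
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# The door needs a key, and no available action obtains the key.
actions = [(set(), {"at-door"}), ({"has-key"}, {"door-open"})]
assert not any("door-open" in s for s in reachable(set(), actions))
```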