Artificial intelligence requires expressing this phenomenon formally, and we'll do it here in the mathematical logical language of situation calculus. Situation calculus is described in [MH69], [Sha97], [Rei01], and, in the extended form used here, in [McC02]. Richmond Thomason in [Tho03] compares situation calculus to theories of action in the philosophical literature.

As usually presented, situation calculus is a non-deterministic theory. The equation

$$s' = Result(e, s)$$

asserts that $s'$ is the situation that results when the event $e$ occurs in the situation $s$; by itself it says nothing about which events actually occur.
Usually, there are some preconditions for the event to occur, and then we have the formula

$$Precond(e, s) \rightarrow s' = Result(e, s).$$
[McC02] proposes adding a formula $Occurs(e, s)$ to the language that can be used to assert that the event $e$ occurs in situation $s$. We have

$$Occurs(e, s) \rightarrow (Next(s) = Result(e, s)).$$
Adding occurrence axioms, which assert that certain actions occur, makes a theory more deterministic by specifying that certain events occur in situations satisfying specified conditions. In general the theory will remain partly non-deterministic, but if there are occurrence axioms specifying what events occur in all situations, then the theory becomes deterministic, i.e. has linear time.
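To make the distinction concrete, here is a minimal Python sketch, not from the paper: situations are sets of fluents, result stands in for $Result$, and the occurrence axiom occurs is defined for every situation, so a single linear history unfolds. All names and fluents here are invented for illustration.

    # Minimal sketch (not from the paper): situations as frozensets of fluents.
    # result plays the role of Result(e, s); occurs is an occurrence axiom
    # defined for every situation, so the theory has linear time.

    def result(event, s):
        """Result(e, s): the situation produced if event e occurs in s.
        By itself this says nothing about which event actually occurs."""
        fluents = set(s)
        if event == "OpenDoor":
            fluents.add("DoorOpen")
        elif event == "CloseDoor":
            fluents.discard("DoorOpen")
        return frozenset(fluents)

    def occurs(s):
        """Occurrence axiom: the event that occurs in situation s."""
        return "CloseDoor" if "DoorOpen" in s else "OpenDoor"

    def next_situation(s):
        """Occurs(e, s) -> Next(s) = Result(e, s)."""
        return result(occurs(s), s)

    s = frozenset()                     # initial situation: door closed
    for _ in range(3):                  # a single deterministic history unfolds
        print(occurs(s), "->", sorted(next_situation(s)))
        s = next_situation(s)

If occurs were left undefined on some situations, only part of the future would be pinned down, which is the partly non-deterministic case just mentioned.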
We can now give a situation calculus theory for SDFW illustrating the role of a non-deterministic theory in determining what will deterministically happen, i.e. by saying what choice a person or machine will make.
In these formulas, lower case terms denote variables and capitalized
terms denote constants.
Suppose that $actor$ has a choice of just two actions $a1$ and $a2$ that he may perform in situation $s$. We want to say that the event $Does(actor, a1)$ or $Does(actor, a2)$ occurs in $s$ according to which of $Result(Does(actor, a1), s)$ or $Result(Does(actor, a2), s)$ the $actor$ prefers.
The formulas that assert that a person ($actor$) will do the action that he, she or it thinks results in the better situation for him are

$$Choose(actor, a1, a2, s) = \mathbf{if}\ Prefers(actor, Result(Does(actor, a1), s), Result(Does(actor, a2), s))\ \mathbf{then}\ a1\ \mathbf{else}\ a2, \quad (1)$$

and

$$Occurs(Does(actor, Choose(actor, a1, a2, s)), s). \quad (2)$$

Adding (2) makes the theory deterministic by specifying which choice is made.
Here $Prefers(actor, s1, s2)$ is to be understood as asserting that $actor$ prefers $s1$ to $s2$.
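Read operationally, (1) is a comparison and (2) a commitment. The following sketch, continuing the illustrative Python above, shows the shape of the computation; choose and occurs_event are invented names, and result(a, s) abbreviates $Result(Does(actor, a), s)$.

    def choose(actor, a1, a2, s, prefers, result):
        """Formula (1): the action whose result the actor prefers.
        result(a, s) abbreviates Result(Does(actor, a), s)."""
        if prefers(actor, result(a1, s), result(a2, s)):
            return a1
        return a2

    def occurs_event(actor, a1, a2, s, prefers, result):
        """Formula (2): the event occurring in s is Does(actor, Choose(...))."""
        return ("Does", actor, choose(actor, a1, a2, s, prefers, result))

Note that prefers is passed in as a parameter: (1) and (2) are neutral about what the actor's preferences are, which is exactly where a non-deterministic theory like that of greedy John below plugs in.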
Here's a non-deterministic theory of greedy John.

$$Prefers(John, s1, s2) \equiv Money(John, s1) > Money(John, s2). \quad (3)$$
From equations (1)-(3) we can infer

$$Occurs(Does(John, \mathbf{if}\ Money(John, Result(Does(John, a1), s)) > Money(John, Result(Does(John, a2), s))\ \mathbf{then}\ a1\ \mathbf{else}\ a2), s). \quad (4)$$
For simplicity, we have omitted the axioms asserting that $a1$ and $a2$ are exactly the actions available, and the nonmonotonic reasoning used to derive the conclusion.
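For illustration only, greedy John in the same sketch: prefers_greedy plays the role of (3), with an invented Money fluent and toy payoffs, and running it carries out the inference to (4) using occurs_event from above.

    # Greedy John, formula (3): he prefers situations with more money.
    # Situations here are dicts from actors to balances (illustrative).

    def money(actor, s):
        return s.get(actor, 0)

    def prefers_greedy(actor, s1, s2):
        return money(actor, s1) > money(actor, s2)

    def result(action, s):
        """A toy Result: each action changes John's balance differently."""
        payoff = {"TakeJob": 100, "Loaf": 0}
        return {**s, "John": s.get("John", 0) + payoff[action]}

    # The analogue of (4): Does(John, TakeJob) occurs in the initial situation.
    print(occurs_event("John", "TakeJob", "Loaf", {"John": 0},
                       prefers_greedy, result))
    # -> ('Does', 'John', 'TakeJob')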
I used just two actions to keep the formula for $Choose$ short. Having more actions, or even making $Prefers$ probabilistic or quantum, would not change the nature of SDFW. A substantial theory of $Prefers$ is beyond the scope of this article.
This illustrates the role of the non-deterministic theory of $Result$ within a deterministic theory of what occurs. (1) includes the non-deterministic theory of $Result$ used to compute which action leads to the better situation. (2) is the deterministic part that tells which action occurs.
We make four claims.
1. Effective AI systems, e.g. robots, will require identifying and reasoning about their choices once they get beyond what can be achieved with situation-action rules. Chess programs always have.
2. The above theory captures the most basic feature of human free will.
3. $Result(Does(actor, a1), s)$ and $Result(Does(actor, a2), s)$, as they are computed by the agent, are not full states of the world but elements of some theoretical space of approximate situations the agent uses in making its decisions. [McC00] has a discussion of approximate entities. Part of the problem of building human-level AI lies in inventing what kind of entity $Result(Does(actor, a), s)$ shall be taken to be; one illustrative reading follows this list.
4. Whether a human or an animal uses simple free will in a type of situation is subject to experimental investigation--as discussed in section 7.
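One concrete, purely illustrative reading of claim 3 in the same Python sketch: the situations the agent computes and compares are partial descriptions mentioning only the fluents it models, never full world states.

    # Approximate situations as partial descriptions (illustrative).
    # The agent's computed Result tracks a handful of fluents; everything
    # else about the world is simply absent from the description.

    def approx_result(action, approx_s):
        s = dict(approx_s)              # e.g. {"John": 0}; weather etc. absent
        if action == "TakeJob":
            s["John"] = s.get("John", 0) + 100
        return s                        # still a partial description

    print(approx_result("TakeJob", {"John": 0}))   # -> {'John': 100}

Many distinct states of the real world collapse onto the same approximate situation; the agent's preferences and choices are computed over the approximations.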
Formulas (1) and (2) illustrate $actor$ making a choice. They don't say anything about $actor$ knowing it has choices or preferring situations in which it has more choices. SDFW is therefore a partial theory that requires extension when we need to account for these phenomena.