
Situation calculus formulas for SDFW

Artificial intelligence requires expressing this phenomenon formally, and we'll do it here in the mathematical logical language of situation calculus. Situation calculus is described in [MH69], [Sha97], [Rei01], and in the extended form used here, in [McC02]. Richmond Thomason in [Tho03] compares situation calculus to theories of action in the philosophical literature. As usually presented, situation calculus is a non-deterministic theory. The equation


\begin{displaymath}
s' = Result(e,s)
\end{displaymath}

asserts that $s'$ is the situation that results when event $e$ occurs in the situation $s$. Since there may be many different events that can occur in $s$, and the theory of the function $Result$ does not say which occurs, the theory is non-deterministic. Some AI jargon refers to it as a theory with branching time rather than linear time. Actions are a special case of events, but most AI work discusses only actions.

Usually, there are some preconditions for the event to occur, and then we have the formula


\begin{displaymath}
Precond(e,s) \rightarrow s' = Result(e,s).
\end{displaymath}

[McC02] proposes adding a formula $Occurs(e,s)$ to the language that can be used to assert that the event $e$ occurs in situation $s$. We have


\begin{displaymath}
Occurs(e,s) \rightarrow (Next(s) = Result(e,s)).
\end{displaymath}

Adding occurrence axioms, which assert that certain actions occur, makes a theory more deterministic by specifying that certain events occur in situations satisfying specified conditions. In general the theory will remain partly non-deterministic, but if there are occurrence axioms specifying what events occur in all situations, then the theory becomes deterministic, i.e. has linear time.
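As a rough executable illustration (the situation and event names here are invented for the example, not from the theory itself), a branching-time theory can be modeled as a table saying what each event leads to, with several events possible per situation; adding occurrence axioms, one per situation, collapses the branching tree into a single linear timeline:

```python
# Non-deterministic part: Result(e, s) says what each event leads to,
# but more than one event may be possible in a situation (branching time).
result = {
    ("E1", "S0"): "S1",
    ("E2", "S0"): "S2",
    ("E1", "S1"): "S3",
}

possible_events = {"S0": ["E1", "E2"], "S1": ["E1"], "S3": []}

# Occurrence axioms: assert which event actually occurs in each situation.
occurs = {"S0": "E1", "S1": "E1"}

def next_situation(s):
    """Next(s) = Result(e, s) for the event e asserted to occur in s."""
    e = occurs.get(s)
    return None if e is None else result[(e, s)]

# With an occurrence axiom for every situation reached, time is linear:
s, timeline = "S0", ["S0"]
while (s := next_situation(s)) is not None:
    timeline.append(s)
print(timeline)  # ['S0', 'S1', 'S3']
```

Without the `occurs` table the theory only constrains what *may* happen; with it, a unique successor exists for each situation and the branching structure becomes a single history.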

We can now give a situation calculus theory for SDFW illustrating the role of a non-deterministic theory in determining what will deterministically happen, i.e. by saying what choice a person or machine will make.

In these formulas, lower case terms denote variables and capitalized terms denote constants. Suppose that $actor$ has a choice of just two actions, $a1$ and $a2$, that he may perform in situation $s$. We want to say that the event $Does(actor,a1)$ or $Does(actor,a2)$ occurs in $s$ according to which of $Result(Does(actor,a1),s)$ or $Result(Does(actor,a2),s)$ $actor$ prefers.

The formulas asserting that a person ($actor$) will do the action that he, she or it thinks results in the better situation are


\begin{displaymath}
\begin{array}[l]{l}
Occurs(Does(actor,Choose(actor,a1,a2,s)),s),
\end{array}\end{displaymath} (1)

and


\begin{displaymath}
\begin{array}[l]{l}
Choose(actor,a1,a2,s) = \textbf{if } Prefers(actor,Result(a1,s),Result(a2,s))\\
\textbf{then } a1 \textbf{ else } a2.
\end{array}\end{displaymath} (2)

Adding (2) makes the theory deterministic by specifying which choice is made.

Here $\mbox{Prefers}(actor, s1,s2)$ is to be understood as asserting that $actor$ prefers $s1$ to $s2$.

Here's a non-deterministic theory of greedy John.


\begin{displaymath}
\begin{array}[l]{l}
Result(A1,S0) = S1, \\
Result(A2,S0) = S2, \\
Wealth(John,S1) > Wealth(John,S2), \\
(\forall s\, s')(Wealth(John,s) > Wealth(John,s') \rightarrow \mbox{Prefers}(John,s,s')).
\end{array}\end{displaymath} (3)

As we see, greedy John has a choice of at least two actions in situation $S0$ and prefers a situation in which he has greater wealth to one in which he has lesser wealth.

From formulas (1)-(3) we can infer

\begin{displaymath}
\begin{array}[l]{l}
Occurs(Does(John,A1),S0).
\end{array}\end{displaymath} (4)

For simplicity, we have omitted the axioms asserting that $A1$ and $A2$ are exactly the actions available and the nonmonotonic reasoning used to derive the conclusion.
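A minimal executable sketch of (1)-(3) for greedy John (the wealth values are invented for the example) shows the deterministic conclusion (4) falling out of the non-deterministic theory of $Result$:

```python
# Non-deterministic part: Result for the two actions available in S0.
result = {("A1", "S0"): "S1", ("A2", "S0"): "S2"}

# Invented wealth values consistent with Wealth(John,S1) > Wealth(John,S2).
wealth = {("John", "S1"): 2.0, ("John", "S2"): 1.0}

def prefers(actor, s1, s2):
    """Formula (3): greedy John prefers the situation with greater wealth."""
    return wealth[(actor, s1)] > wealth[(actor, s2)]

def choose(actor, a1, a2, s):
    """Formula (2): pick the action whose Result the actor prefers."""
    if prefers(actor, result[(a1, s)], result[(a2, s)]):
        return a1
    return a2

# Formula (1): the chosen action is the one that occurs.
occurring_action = choose("John", "A1", "A2", "S0")
print(occurring_action)  # 'A1', i.e. conclusion (4): Occurs(Does(John,A1),S0)
```

The sketch makes the division of labor concrete: `result` and `prefers` encode the non-deterministic theory used in deliberation, while `choose` plus the occurrence assertion determine what actually happens.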

I used just two actions to keep the formula for $Choose$ short. Having more actions, or even making $Result$ probabilistic or quantum, would not change the nature of SDFW. A substantial theory of $\mbox{Prefers}$ is beyond the scope of this article.

This illustrates the role of the non-deterministic theory of $Result$ within a deterministic theory of what occurs. (2) includes the non-deterministic theory of $Result$, used to compute which action leads to the better situation. (1) is the deterministic part that says which action occurs.

We make four claims.

1. Effective AI systems, e.g. robots, will require identifying and reasoning about their choices once they get beyond what can be achieved with situation-action rules. Chess programs have always done so.

2. The above theory captures the most basic feature of human free will.

3. $Result(a1,s)$ and $Result(a2,s)$, as they are computed by the agent, are not full states of the world but elements of some theoretical space of approximate situations the agent uses in making its decisions. [McC00] has a discussion of approximate entities. Part of the problem of building human-level AI lies in inventing what kind of entity $Result(a,s)$ shall be taken to be.

4. Whether a human or an animal uses simple free will in a type of situation is subject to experimental investigation--as discussed in section 7.

Formulas (1) and (2) illustrate $actor$ making a choice. They don't say anything about $actor$ knowing it has choices or preferring situations in which it has more choices. SDFW is therefore a partial theory that requires extension when we need to account for these phenomena.


John McCarthy
2005-11-06