A system operating only with situation-action rules, in which the action in a situation is determined directly by the characteristics of the situation, does not involve free will. Much human action, and almost all animal action, reacts directly to the present situation and does not involve anticipating the consequences of alternative actions.
One of the effects of practicing an action is to remove deliberate choice from the computation and to respond immediately to the stimulus. This is often, but not always, appropriate.
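The contrast between a situation-action rule and choosing by comparing anticipated consequences can be sketched in a few lines. Everything here, the function names, the frog-like rule table, and the toy world, is hypothetical, invented only to make the distinction concrete.

```python
# A situation-action system: the action is read directly off the
# situation, with no anticipation of consequences.
def situation_action(situation):
    rules = {"fly_in_range": "flick_tongue", "shadow_overhead": "jump"}
    return rules.get(situation, "wait")

# A consequence-comparing system: each available action's anticipated
# outcome is computed, and the action with the best outcome is chosen.
def consequence_comparing(situation, actions, result, utility):
    return max(actions, key=lambda a: utility(result(situation, a)))
```

The first function never represents its alternatives at all; the second does, which is the feature this article takes as the minimal form of free will.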
Human free will, i.e. considering the consequences of action, is surely the product of evolution.
Do animals, even apes, ever make decisions based on comparing anticipated consequences? Almost always no. Thus when a frog sees a fly and flicks out its tongue to catch it, the frog is not comparing the consequences of catching the fly with the consequences of not catching the fly.
One computer scientist claims that dogs (at least his dog) consider the consequences of alternative actions. I'll bet the proposition can be tested, but I don't yet see how.
According to Dennett (phone conversation), some recent experiments suggest that apes sometimes consider the consequences of alternative actions. If so, they have free will in the sense of this article.
If not even apes ordinarily compare consequences, maybe apes can be trained to do it.
Chess programs do compare the consequences of various moves, and so have free will in the sense of this article. Present programs are not conscious of their free will, however. [McC96] discusses what consciousness computer programs need.
People and chess programs carry thinking about choice beyond the first level. Thus ``If I make this move, my opponent (or nature regarded as an opponent) will have the following choices, each of which will give me further choices.'' Examining such trees of possibilities is an aspect of free will in the world, but the simplest form of free will in a deterministic world does not involve branching more than once.
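This branching examination of choices can be sketched as a standard minimax search over the tree of possibilities. The tiny interface below (a state, a moves function, an apply_move function, a value function) is a hypothetical illustration, not the structure of any particular chess program.

```python
# Look ahead `depth` plies: on my turn take the best of my choices;
# on the opponent's turn assume the choice worst for me.
def minimax(state, depth, my_turn, moves, apply_move, value):
    options = moves(state)
    if depth == 0 or not options:
        return value(state)
    results = [minimax(apply_move(state, m), depth - 1, not my_turn,
                       moves, apply_move, value)
               for m in options]
    return max(results) if my_turn else min(results)
```

Even at depth 1 this compares the consequences of alternative actions, which is all that SDFW requires; deeper search only elaborates the same structure.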
Daniel Dennett [Den78, Den03] argues that a system's having free will depends on its being complex. I don't agree, and it would be interesting to design the simplest possible system exhibiting deterministic free will. A program for tic-tac-toe is simpler than a chess program, but the usual program does consider choices.
However, the number of possible tic-tac-toe positions is small enough so that one could make a program with the same external behavior that just looked up each position in a table to determine its move. Such a program would not have SDFW. Likewise, Ken Thompson has built chess programs for end games with five or fewer pieces on the board that use table lookup rather than look-ahead. See [Tho86]. Thus whether a system has SDFW depends on its structure and not just on its behavior. Beyond five pieces, direct lookup in chess is infeasible, and all present chess programs for the full game use look-ahead, i.e. they consider alternatives for themselves and their opponents. I'll conjecture that successful chess programs must have at least SDFW. This is not the only matter in which quantitative considerations make a philosophical difference. Thus whether the translation of a text is indeterminate depends on the length of the text.
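The structural point can be made concrete with two tiny players that behave identically but differ in structure: one reads its move from a table, the other compares alternatives. The game and the table here are hypothetical stand-ins for tic-tac-toe or Thompson's endgame tables.

```python
# Table lookup: the same external behavior, but no alternatives are
# considered, so by the criterion of this article it lacks SDFW.
def lookup_player(position, table):
    return table[position]

# Look-ahead: the alternatives are represented and compared, so this
# structure has SDFW even when its moves match the table's.
def lookahead_player(position, moves, result, value):
    return max(moves(position), key=lambda m: value(result(position, m)))
```

If the table is built by recording the look-ahead player's choices, the two players are behaviorally indistinguishable, which is exactly why SDFW must be attributed to structure rather than to behavior.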
Simpler systems than tic-tac-toe programs with SDFW are readily constructed. The theory of greedy John formalized by (3) may be about as simple as possible and still involves free will.
Essential to having any kind of free will is knowledge of one's choices of action and choosing among them. In many environments, animals with at least SDFW are more likely to survive than those without it. This seems to be why human free will evolved. When and how it evolved, as with other questions about evolution, won't be easy to answer.
Gary Drescher [Dre91] contrasts situation-action laws with what he calls the prediction-value paradigm. His prediction-value paradigm corresponds approximately to the deterministic free will discussed in this article.
I thank Drescher for showing me his forthcoming [Dre06]. His notion of a choice system corresponds pretty well to SDFW, although it is embedded in a more elaborate context.
This article benefited from discussions with Johan van Benthem, Daniel Dennett, Gary Drescher, and Jon Perry. The work was partly supported by the Defense Advanced Research Projects Agency (DARPA).