In the foregoing we have taken the representation of the situation as a system of interacting subautomata for granted. Indeed, if you are willing to take it for granted, you can skip this section.

However, a given overall automaton system might be represented as a system of interacting subautomata in a number of ways, and different representations might yield different results about what a given subautomaton can achieve, what would have happened if some subautomaton had acted differently, or what caused what. Indeed, in a different representation, the same or corresponding subautomata might not be identifiable. Therefore, these notions depend on the representation chosen.

For example, suppose a pair of Martians observe the situation in a room. One Martian analyzes it as a collection of interacting people as we do, but the second Martian groups all the heads together into one subautomaton and all the bodies into another. How is the first Martian to convince the second that his representation is to be preferred? Roughly speaking, he would argue that the interaction between the heads and bodies of the same person is closer than the interaction between the different heads, and so more of an analysis has been achieved from "the primordial muddle" with the conventional representation. He will be especially convincing when he points out that when the meeting is over the heads will stop interacting with each other, but will continue to interact with their respective bodies.

We can express this kind of argument formally in terms of automata as
follows: Suppose we have an autonomous automaton *A*, i.e. an
automaton without inputs, and let it have *k* states. Further, let
*m* and *n* be two integers such that *mn* ≥ *k*. Now label *k*
points of an *m*-by-*n* array with the states of *A*. This can be
done in (*mn*)!/(*mn* − *k*)! ways. For each of these
ways we have a representation of the automaton *A* as a system of an
*m*-state automaton *B* interacting with an *n*-state automaton *C*.
Namely, corresponding to each row of the array we have a state of *B*
and to each column a state of *C*. The signals are in 1-1
correspondence with the states themselves; thus each subautomaton has
just as many values of its output as it has states.
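The grid construction above can be sketched in code. This is an illustrative sketch, not part of the paper: the helper names (`decompose`, `run_joint`) and the particular 4-state example are ours. Following the text, each subautomaton's output signal is simply its own current state, so *B*'s next state depends on its row and on the column signalled by *C*, and symmetrically for *C*.

```python
def decompose(transition, placement):
    """Split an autonomous automaton A into a row-automaton B and a
    column-automaton C, per the grid construction in the text.

    transition: dict mapping each state of A to its successor state
    placement:  dict mapping each state of A to its (row, col) cell
                in the m-by-n array (injective)
    Returns transition tables for B and C, each keyed by
    (own state, signal received from the other automaton); they are
    defined only on the occupied cells, which is all the joint system
    ever visits."""
    B, C = {}, {}
    for state, (row, col) in placement.items():
        nrow, ncol = placement[transition[state]]
        B[(row, col)] = nrow  # B sees C's state (the column) as input
        C[(col, row)] = ncol  # C sees B's state (the row) as input
    return B, C

def run_joint(B, C, placement, transition, start, steps):
    """Run B and C together and check they reproduce A's behaviour."""
    row, col = placement[start]
    state = start
    for _ in range(steps):
        row, col = B[(row, col)], C[(col, row)]
        state = transition[state]
        assert placement[state] == (row, col)
    return state

# A 4-state cycle placed on a 2-by-2 array.
transition = {0: 1, 1: 2, 2: 3, 3: 0}
placement = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
B, C = decompose(transition, placement)
print(run_joint(B, C, placement, transition, 0, 5))  # prints 1
```

The assertion inside `run_joint` verifies that the pair (*B*, *C*) is indeed a representation of *A*: the joint state (row, column) tracks *A*'s state at every step.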

Now it may happen that two of these signals are equivalent in their
effect on the other subautomaton, and we use this equivalence relation
to form equivalence classes of signals. We may then regard the
equivalence classes as the signals themselves. Suppose then that
there are now *r* signals from *B* to *C* and *s* signals from *C* to
*B*. We ask how small *r* and *s* can be taken in general compared to
*m* and *n*. The answer may be obtained by counting the number of
inequivalent automata with *k* states and comparing it with the number
of systems of two automata with *m* and *n* states respectively and
*r* and *s* signals going in the respective directions. The result is
not worth working out in detail, but tells us that only a few of the
*k* state automata admit such a decomposition with *r* and *s* small
compared to *m* and *n*. Therefore, if an automaton happens to admit
such a decomposition it is very unusual for it to admit a second such
decomposition that is not equivalent to the first with respect to some
renaming of states. Applying this argument to the real world, we may
say that it is overwhelmingly probable that our customary
decomposition of the world automaton into separate people and things
has a unique, objective and usually preferred status. Therefore, the
notions of *can*, of causality, and of counterfactual associated with
this decomposition also have a preferred status.
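The comparison the text declines to work out in detail can be illustrated numerically. The counting conventions here are our assumptions, not the paper's: each automaton's next state is a function of its own state and the incoming signal, and its outgoing signal is a function of its own state. Counting in logarithms shows how few bits of choice a decomposed system has compared with an arbitrary *k*-state automaton.

```python
from math import log2

def log2_count_automata(k):
    """log2 of the number of transition functions for an autonomous
    k-state automaton (each state has one successor): k**k."""
    return k * log2(k)

def log2_count_systems(m, n, r, s):
    """log2 of a naive count of two-automaton systems: B has m states
    and receives one of s signals from C; C has n states and receives
    one of r signals from B; each emits a signal determined by its
    own state."""
    return (m * s * log2(m)    # B's transition function: m**(m*s)
            + n * r * log2(n)  # C's transition function: n**(n*r)
            + m * log2(r)      # B's output map: r**m
            + n * log2(s))     # C's output map: s**n

m = n = 10
k = m * n
print(round(log2_count_automata(k)))          # 664
print(round(log2_count_systems(m, n, 2, 2)))  # 153
```

With *m* = *n* = 10 and *r* = *s* = 2, there are roughly 2^664 automata with *k* = 100 states but only about 2^153 decomposed systems, so almost no 100-state automaton admits such a decomposition, which is the point of the argument.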

These considerations are similar to those used by Shannon [Shannon 1938] to find lower bounds on the number of relay contacts required on the average to realize a boolean function.

An automaton can do various things. However, the automaton model proposed so far does not involve consciousness of the choices available. This requires that the automata be given a mental structure in which facts are represented by sentences. This is better done in a more sophisticated model than finite automata. We start on it in the next section.
