
Representing a person by a system of subautomata


The idea that what a person can do depends on his position rather than on his characteristics is somewhat counter-intuitive. This impression can be mitigated as follows: Imagine the person to be made up of several subautomata; the output of the outer subautomaton is the motion of the joints. If we break the connection to the world at that point we can answer questions like, `Can he fit through a given hole?' We shall get some counter-intuitive answers, however, such as that he can run at top speed for an hour or can jump over a building, since these are sequences of motions of his joints that would achieve these results.

The next step, however, is to consider a subautomaton that receives the nerve impulses from the spinal cord and transmits them to the muscles. If we break at the input to this automaton, we shall no longer say that he can jump over a building or run long at top speed since the limitations of the muscles will be taken into account. We shall, however, say that he can ride a unicycle since appropriate nerve signals would achieve this result.

The notion of can corresponding to the intuitive notion in the largest number of cases might be obtained by hypothesizing an organ of will, which makes decisions to do things and transmits these decisions to the main part of the brain that tries to carry them out and contains all the knowledge of particular facts. If we make the break at this point we shall be able to say that so-and-so cannot dial the President's secret and private telephone number because he does not know it, even though if the question were asked could he dial that particular number, the answer would be yes. However, even this break would not give the statement, `I cannot go without saying goodbye, because this would hurt the child's feelings'.

On the basis of these examples, one might try to postulate a sequence of narrower and narrower notions of can terminating in a notion according to which a person can do only what he actually does. This extreme notion would then be superfluous. Actually, one should not look for a single best notion of can; each of the above-mentioned notions is useful and is actually used in some circumstances. Sometimes, more than one notion is used in a single sentence, when two different levels of constraint are mentioned.
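The hierarchy of break points can be sketched as a toy program. This is only an illustration of the idea, not anything from the paper: every function name, state name, and mapping below is invented, and the subautomata are simplified to memoryless stages, whereas the paper's automata have internal state.

```python
# A toy sketch (not from the paper) of `can' relative to a break point.
# Every name and mapping below is invented for illustration.

def will(decision):
    # Organ of will: decision -> intended action
    return {"dial": "send-number", "jump": "try-jump"}.get(decision)

def nerves(intent):
    # Brain and spinal cord: intended action -> nerve impulse
    return {"send-number": "impulse-a", "try-jump": "impulse-b"}.get(intent)

def muscles(impulse):
    # Muscles: nerve impulse -> joint motion, with physical limits built in
    return {"impulse-a": "finger-motion", "impulse-b": "small-jump"}.get(impulse)

STAGES = [will, nerves, muscles]  # outermost break = after all stages

def can(result, break_at, inputs):
    """He `can' achieve result if some input injected at the break point
    propagates through the remaining stages to produce it."""
    for x in inputs:
        y = x
        for stage in STAGES[break_at:]:
            y = stage(y)
        if y == result:
            return True
    return False

# Break at the joints (break_at=3): any joint motion counts as achievable,
# so he "can" jump over a building.
print(can("building-jump", 3, ["building-jump", "finger-motion"]))  # True
# Break at the nerve/muscle interface (break_at=2): muscle limits now apply.
print(can("building-jump", 2, ["impulse-a", "impulse-b"]))          # False
# Break at the will (break_at=0): he can do what some decision would achieve.
print(can("finger-motion", 0, ["dial", "jump"]))                    # True
```

Moving the break point inward shrinks the set of achievable results, which is exactly the sequence of narrower and narrower notions of can described above.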

Nondeterministic systems as approximations to deterministic systems are discussed in [McCarthy 1999a]. For now we'll settle for an example involving a chess program. It can be reasoned about at various levels. Superhuman Martians can compute what it will do by looking at the initial electronic state and following the electronics. Someone with less computational power can interpret the program on another computer knowing the program and the position and determine the move that will be made. A mere human chess player may be reduced to saying that certain moves are excluded as obviously disastrous but be unable to decide which of (say) two moves the program will make. The chess player's model is a nondeterministic approximation to the program.
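The chess example can be made concrete with a toy sketch. Again, nothing here comes from the paper: the move names, the stand-in "program", and the blunder set are all invented. The point is only the relationship between the two models: the deterministic program yields one move, while the weaker observer's nondeterministic model yields a set of moves that merely excludes the obvious disasters.

```python
# Illustrative sketch (all details invented): a deterministic program
# approximated by a nondeterministic model that only excludes bad moves.

def program_move(position):
    # The deterministic program: full computation determines one move.
    # (Toy stand-in: pick the alphabetically first legal move.)
    return sorted(position["legal"])[0]

def human_model(position):
    # The chess player's nondeterministic approximation: rule out moves
    # judged obviously disastrous, but leave the rest undetermined.
    return {m for m in position["legal"] if m not in position["blunders"]}

pos = {"legal": {"e4", "d4", "h4"}, "blunders": {"h4"}}

actual = program_move(pos)        # "d4"
predicted = human_model(pos)      # {"d4", "e4"}

# The approximation is sound if the actual move lies in the predicted set.
assert actual in predicted
```

The weaker observer's model is correct in a weaker sense: it never asserts which move will be made, only that the move will fall inside the non-excluded set.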



John McCarthy
Sun Nov 21 23:39:43 PST 1999