DISCUSSION OF LITERATURE

 

The plan for achieving a generally intelligent program outlined in this paper will clearly be difficult to carry out. Therefore, it is natural to ask if some simpler scheme will work, and we shall devote this section to criticising some simpler schemes that have been proposed.

1. L. Fogel (1966) proposes to evolve intelligent automata by altering their state transition diagrams so that they perform better on tasks of greater and greater complexity. The experiments described by Fogel involve machines with less than 10 states being evolved to predict the next symbol of a quite simple sequence. We do not think this approach has much chance of achieving interesting results because it seems limited to automata with small numbers of states, say less than 100, whereas computer programs regarded as automata have 2^(10^5) to 2^(10^7) states. This is a reflection of the fact that, while the representation of behaviours by finite automata is metaphysically adequate--in principle every behaviour of which a human or machine is capable can be so represented--this representation is not epistemologically adequate; that is, conditions we might wish to impose on a behaviour, or what is learned from an experience, are not readily expressible as changes in the state diagram of an automaton.
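
The flavour of the approach can be conveyed by a small sketch (in Python, purely for illustration; the hill-climbing mutation scheme and the scoring rule below are assumptions of the sketch, not Fogel's actual procedure): a machine's transition and output tables are mutated at random, and a mutation is kept whenever it does not worsen the number of next symbols correctly predicted on a simple periodic sequence.

# Illustrative sketch only: evolving a small finite automaton to predict the
# next symbol of a simple periodic sequence. Not Fogel's actual procedure;
# the mutation scheme and scoring rule are assumptions made for the sketch.
import random

ALPHABET = [0, 1]
N_STATES = 5                       # a machine with fewer than 10 states, as in the text

def random_automaton():
    # transition[state][symbol] -> next state; output[state][symbol] -> predicted next symbol
    return {
        "next": [[random.randrange(N_STATES) for _ in ALPHABET] for _ in range(N_STATES)],
        "out":  [[random.choice(ALPHABET)    for _ in ALPHABET] for _ in range(N_STATES)],
    }

def score(machine, sequence):
    # Run the machine over the sequence, counting correct next-symbol predictions.
    state, correct = 0, 0
    for sym, nxt in zip(sequence, sequence[1:]):
        if machine["out"][state][sym] == nxt:
            correct += 1
        state = machine["next"][state][sym]
    return correct

def mutate(machine):
    # Alter one entry of the state-transition or output table at random.
    m = {"next": [row[:] for row in machine["next"]],
         "out":  [row[:] for row in machine["out"]]}
    table = random.choice(["next", "out"])
    s, a = random.randrange(N_STATES), random.randrange(len(ALPHABET))
    m[table][s][a] = random.randrange(N_STATES) if table == "next" else random.choice(ALPHABET)
    return m

sequence = [0, 0, 1] * 40          # a quite simple periodic sequence
best = random_automaton()
for _ in range(2000):
    candidate = mutate(best)
    if score(candidate, sequence) >= score(best, sequence):
        best = candidate
print(score(best, sequence), "of", len(sequence) - 1, "symbols predicted")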

2. A number of investigators (Galanter 1956, Pivar and Finkelstein 1964) have taken the view that intelligence may be regarded as the ability to predict the future of a sequence from observation of its past. Presumably, the idea is that the experience of a person can be regarded as a sequence of discrete events and that intelligent people can predict the future. Artificial intelligence is then studied by writing programs to predict sequences formed according to some simple class of laws (sometimes probabilistic laws). Again the model is metaphysically adequate but epistemologically inadequate.

In other words, what we know about the world is divided into knowledge about many aspects of it, taken separately and with rather weak interaction. A machine that worked with the undifferentiated encoding of experience into a sequence would first have to solve the encoding, a task more difficult than any the sequence extrapolators are prepared to undertake. Moreover, our knowledge is not usable to predict exact sequences of experience. Imagine a person who is correctly predicting the course of a football game he is watching; he is not predicting each visual sensation (the play of light and shadow, the exact movements of the players and the crowd). Instead his prediction is on the level of: team A is getting tired; they should start to fumble or have their passes intercepted.
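
For concreteness, the kind of extrapolator referred to above can be sketched in a few lines (the period-finding rule below is an assumption for illustration, not the method of Galanter or of Pivar and Finkelstein): it guesses the shortest period consistent with the observed symbols and predicts accordingly.

# Illustrative sketch of a sequence extrapolator of the kind criticised above:
# it accepts the shortest period that reproduces the observed history and
# predicts the next symbol on that basis.
def predict_next(history):
    n = len(history)
    for period in range(1, n):
        if all(history[i] == history[i % period] for i in range(n)):
            return history[n % period]
    return history[-1]              # fall back on repeating the last symbol

print(predict_next([1, 2, 3, 1, 2, 3, 1, 2]))   # -> 3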

3. Friedberg (1958, 1959) has experimented with representing behaviour by a computer program and evolving a program by random mutations to perform a task. The epistemological inadequacy of the representation is expressed by the fact that desired changes in behaviour are often not representable by small changes in the machine language form of the program. In particular, the effect on a reasoning program of learning a new fact is not so representable.
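
A toy analogue makes the idea concrete (for illustration only; Friedberg's system mutated machine-language programs, and the miniature register machine, instruction set, and task below are assumptions of the sketch): a program is a fixed-length list of instructions, one instruction is replaced at random at each step, and the mutant is kept when it scores at least as well on the task.

# Toy analogue of random program mutation, for illustration only. The
# instruction set and the task (doubling the input) are made up for the sketch.
import random

OPS = ["inc", "dec", "add", "copy", "nop"]     # made-up instruction set
PROGRAM_LENGTH = 8
N_REGISTERS = 3                                # register 0 holds input and output

def run(program, x):
    regs = [x, 0, 0]
    for op, a, b in program:
        if op == "inc":    regs[a] += 1
        elif op == "dec":  regs[a] -= 1
        elif op == "add":  regs[a] += regs[b]
        elif op == "copy": regs[a] = regs[b]
    return regs[0]

def random_instruction():
    return (random.choice(OPS), random.randrange(N_REGISTERS), random.randrange(N_REGISTERS))

def score(program):
    # Count test cases on which the program doubles its input.
    return sum(run(program, x) == 2 * x for x in range(10))

program = [random_instruction() for _ in range(PROGRAM_LENGTH)]
for _ in range(5000):
    mutant = program[:]
    mutant[random.randrange(PROGRAM_LENGTH)] = random_instruction()
    if score(mutant) >= score(program):
        program = mutant
print(score(program), "of 10 test cases correct")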

4. Newell and Simon worked for a number of years with a program called the General Problem Solver (Newell et al. 1959, Newell and Simon 1961). This program represents problems as the task of transforming one symbolic expression into another using a fixed set of transformation rules. They succeeded in putting a fair variety of problems into this form, but for a number of problems the representation was awkward enough so that GPS could only do small examples. The task of improving GPS was studied as a GPS task, but we believe it was finally abandoned. The name, General Problem Solver, suggests that its authors at one time believed that most problems could be put in its terms, but their more recent publications have indicated other points of view.
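
As a rough illustration of this problem formulation (not GPS itself, which used means-ends analysis guided by difference tables; the rewrite rules and the breadth-first search below are assumptions of the sketch), a toy problem can be cast in the same form: transform one symbolic expression into another by applying a fixed set of transformation rules.

# Minimal sketch of the GPS-style problem formulation: transform one symbolic
# expression into another using a fixed set of transformation rules, here
# found by plain breadth-first search over string rewrites.
from collections import deque

RULES = [("AB", "BA"), ("B", "CC"), ("CC", "D")]   # made-up rewrite rules

def successors(expr):
    # Apply each rule at every position where its left-hand side occurs.
    for lhs, rhs in RULES:
        start = expr.find(lhs)
        while start != -1:
            yield expr[:start] + rhs + expr[start + len(lhs):]
            start = expr.find(lhs, start + 1)

def transform(initial, goal):
    # Breadth-first search from the initial expression to the goal expression.
    frontier, seen = deque([[initial]]), {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(transform("AB", "DA"))    # -> ['AB', 'BA', 'CCA', 'DA']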

It is interesting to compare the point of view of the present paper with that expressed in Newell and Ernst (1965) from which we quote the second paragraph:

We may consider a problem solver to be a process that takes a problem as input and provides (when successful) the solution as output. The problem consists of the problem statement, or what is immediately given, and auxiliary information, which is potentially relevant to the problem but available only as the result of processing. The problem solver has available certain methods for attempting to solve the problem. For the problem solver to be able to work on a problem it must first transform the problem statement from its external form into the internal representation. Thus (roughly), the class of problems the problem solver can convert into its internal representation determines how broad or general it is, and its success in obtaining solutions to problems in internal form determines its power. Whether or not universal, such a decomposition fits well the structure of present problem solving programs.

In a very approximate way, their division of the problem solver into the input program that converts problems into internal representation and the problem solver proper corresponds to our division into the epistemological and heuristic parts of the artificial intelligence problem. The difference is that we are more concerned with the suitability of the internal representation itself.

Newell (1965) poses the problem of how to get what we call heuristically adequate representations of problems, and Simon (1966) discusses the concept of `can' in a way that should be compared with the present approach.



