
Why Artificial Intelligence Needs Philosophy

The idea of an intelligent machine is old, but serious work on the artificial intelligence problem, or even a serious understanding of what the problem is, awaited the stored-program computer. We may regard the subject of artificial intelligence as beginning with Turing's article 'Computing Machinery and Intelligence' (Turing 1950) and with Shannon's (1950) discussion of how a machine might be programmed to play chess.

Since that time, progress in artificial intelligence has been mainly along the following lines. Programs have been written to solve a class of problems that give humans intellectual difficulty: examples are playing chess or checkers, proving mathematical theorems, transforming one symbolic expression into another by given rules, integrating expressions composed of elementary functions, and determining chemical compounds consistent with mass-spectrographic and other data. In the course of designing these programs, intellectual mechanisms of greater or lesser generality are identified, sometimes by introspection, sometimes by mathematical analysis, and sometimes by experiments with human subjects. Testing the programs sometimes leads to better understanding of the intellectual mechanisms and to the identification of new ones.

An alternative approach is to start with the intellectual mechanisms (for example, memory, decision-making by comparisons of scores made up of weighted sums of sub-criteria, learning, tree-search, extrapolation) and make up problems that exercise these mechanisms.
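
As a concrete illustration of one such mechanism, here is a minimal sketch, in Python, of decision-making by comparison of scores made up of weighted sums of sub-criteria. The criterion names and weights are invented for illustration and are not taken from any particular program.

    # Invented sub-criteria and weights for evaluating candidate positions.
    WEIGHTS = {"material": 1.0, "mobility": 0.3, "safety": 0.5}

    def score(position):
        # Score a candidate as a weighted sum of its sub-criterion values.
        return sum(WEIGHTS[c] * position[c] for c in WEIGHTS)

    def choose(candidates):
        # Decision-making by comparison of scores: take the best candidate.
        return max(candidates, key=score)

    candidates = [
        {"material": 2, "mobility": 5, "safety": 1},   # score 4.0
        {"material": 3, "mobility": 1, "safety": 2},   # score 4.3
    ]
    print(choose(candidates))  # the second candidate wins

Tree search and learning fit the same skeleton: search supplies the candidate positions, and learning adjusts the weights.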

In our opinion the best of this work has led to increased understanding of intellectual mechanisms, and this is essential for the development of artificial intelligence, even though few investigators have tried to place their particular mechanism in the general context of artificial intelligence. Sometimes this is because the investigator identifies his particular problem with the field as a whole; he thinks he sees the woods when in fact he is looking at a tree. An old but not yet superseded discussion of intellectual mechanisms is Minsky (1961); see also Newell's (1965) review of the state of artificial intelligence.

There have been several attempts to design intelligence with the same kind of flexibility as that of a human. This has meant different things to different investigators, but none has met with much success even in the sense of general intelligence used by the investigator in question. Since our criticism of this work will be that it does not face the philosophical problems discussed in this paper, we shall postpone discussing it until a concluding section. However, we are obliged at this point to present our notion of general intelligence.

It is not difficult to give sufficient conditions for general intelligence. Turing's idea, that the machine should successfully pretend to a sophisticated observer to be a human being for half an hour, will do. However, if we direct our efforts towards such a goal, our attention is distracted by certain superficial aspects of human behaviour that have to be imitated. Turing excluded some of these by specifying that the human to be imitated is at the end of a teletype line, so that voice, appearance, smell, etc., do not have to be considered. Turing did allow himself to be distracted into discussing the imitation of human fallibility in arithmetic, laziness, and the ability to use the English language.

However, work on artificial intelligence, especially general intelligence, will be improved by a clearer idea of what intelligence is. One way is to give a purely behavioural or black-box definition. In this case we have to say that a machine is intelligent if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment. This definition seems vague; perhaps it can be made somewhat more precise without departing from behavioural terms, but we shall not try to do so.

Instead, we shall use in our definition certain structures apparent to introspection, such as knowledge of facts. The risk is twofold: in the first place we might be mistaken in our introspective views of our own mental structure; we may only think we use facts. In the second place there might be entities which satisfy behaviourist criteria of intelligence but are not organized in this way. However, we regard the construction of intelligent machines as fact manipulators as being the best bet both for constructing artificial intelligence and understanding natural intelligence.

We shall, therefore, be interested in an intelligent entity that is equipped with a representation or model of the world. On the basis of this representation, a certain class of internally posed questions can be answered, not always correctly. Such questions are:

1. What will happen next in a certain aspect of the situation?

2. What will happen if I do a certain action?

3. What is 3 + 3?

4. What does he want?

5. Can I figure out how to do this or must I get information from someone else or something else?

The above is not a fully representative set of questions, and we do not have such a set yet.

On this basis we shall say that an entity is intelligent if it has an adequate model of the world (including the intellectual world of mathematics and an understanding of its own goals and other mental processes), if it is clever enough to answer a wide variety of questions on the basis of this model, if it can get additional information from the external world when required, and if it can perform such tasks in the external world as its goals demand and its physical abilities permit.

According to this definition intelligence has two parts, which we shall call the epistemological and the heuristic. The epistemological part is the representation of the world in such a form that the solution of problems follows from the facts expressed in the representation. The heuristic part is the mechanism that, on the basis of this information, solves the problem and decides what to do. Most of the work in artificial intelligence so far can be regarded as devoted to the heuristic part of the problem. This paper, however, is entirely devoted to the epistemological part.
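
To make the division concrete, the following toy program, with an invented triple notation for facts, separates the two parts: the epistemological part is the table of facts about the world, and the heuristic part is the search procedure that answers an internally posed question on the basis of those facts alone.

    # Epistemological part: facts about the world, as invented
    # (subject, relation, object) triples.
    FACTS = {
        ("key", "in", "drawer"),
        ("drawer", "in", "desk"),
        ("desk", "in", "office"),
    }

    # Heuristic part: a search procedure that answers the question
    # "is x (transitively) in y?" from the facts alone.
    def located_in(x, y):
        if (x, "in", y) in FACTS:
            return True
        return any(located_in(z, y)
                   for (a, r, z) in FACTS if a == x and r == "in")

    print(located_in("key", "office"))  # True

The same table of facts could serve a quite different heuristic part; that separability is the point of the division.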

Given this notion of intelligence the following kinds of problems arise in constructing the epistemological part of an artificial intelligence:

1. What kind of general representation of the world will allow the incorporation of specific observations and new scientific laws as they are discovered?

2. Besides the representation of the physical world, what other kinds of entities have to be provided for? For example: mathematical systems, goals, and states of knowledge.

3. How are observations to be used to get knowledge about the world, and how are the other kinds of knowledge to be obtained? In particular, what kinds of knowledge about the system's own state of mind are to be provided for?

4. In what kind of internal notation is the system's knowledge to be expressed?

These questions are identical with, or at least correspond to, some traditional questions of philosophy, especially in metaphysics, epistemology, and philosophical logic. Therefore, it is important for the research worker in artificial intelligence to consider what the philosophers have had to say.

Since the philosophers have not really come to an agreement in 2500 years, it might seem that artificial intelligence is in a rather hopeless state if it is to depend on getting concrete enough information out of philosophy to write computer programs. Fortunately, merely undertaking to embody the philosophy in a computer program involves making enough philosophical presuppositions to exclude most philosophy as irrelevant. Undertaking to construct a general intelligent computer program seems to entail the following presuppositions:

1. The physical world exists and already contains some intelligent machines called people.

2. Information about this world is obtainable through the senses and is expressible internally.

3. Our common-sense view of the world is approximately correct and so is our scientific view.

4. The right way to think about the general problems of metaphysics and epistemology is not to attempt to clear one's own mind of all knowledge and start with 'Cogito ergo sum' and build up from there. Instead, we propose to use all of our knowledge to construct a computer program that knows. The correctness of our philosophical system will be tested by numerous comparisons between the beliefs of the program and our own observations and knowledge. (This point of view corresponds to the presently dominant attitude towards the foundations of mathematics. We study the structure of mathematical systems, from the outside as it were, using whatever metamathematical tools seem useful, instead of assuming as little as possible and building up axiom by axiom and rule by rule within a system.)

5. We must undertake to construct a rather comprehensive philosophical system, contrary to the present tendency to study problems separately and not try to put the results together.

6. The criterion for definiteness of the system becomes much stronger. Unless, for example, a system of epistemology allows us, at least in principle, to construct a computer program to seek knowledge in accordance with it, it must be rejected as too vague.

7. The problem of 'free will' assumes an acute but concrete form. Namely, in common-sense reasoning, a person often decides what to do by evaluating the results of the different actions he can take. An intelligent program must use this same process, and, using an exact formal sense of 'can', it must be able to show that it has these alternatives without denying that it is a deterministic machine. (A minimal sketch of such an evaluation of alternatives follows this list.)

8. The first task is to define even a naive, common-sense view of the world precisely enough to program a computer to act accordingly. This is a very difficult task in itself.
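
To show the process mentioned in point 7 concretely, here is a minimal sketch with an invented world model and goal: a deterministic program that represents the actions it 'can' take explicitly, evaluates the result of each, and chooses among them.

    # The program's alternatives, represented explicitly: what it 'can' do.
    # Each action maps a world state to the resulting world state.
    ACTIONS = {
        "wait":      lambda world: dict(world),
        "open_door": lambda world: {**world, "door": "open"},
        "turn_on":   lambda world: {**world, "light": "on"},
    }

    def utility(world):
        # Invented goal: the program prefers the light to be on.
        return 1 if world.get("light") == "on" else 0

    def decide(world):
        # Deterministically evaluate the result of each available action
        # and return the name of the best one.
        return max(ACTIONS, key=lambda a: utility(ACTIONS[a](world)))

    print(decide({"door": "shut", "light": "off"}))  # 'turn_on'

The program is deterministic throughout, yet its alternatives are genuinely represented and compared; this is the concrete form the 'free will' problem takes.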

We must mention that there is one possible way of getting an artificial intelligence without having to understand it or solve the related philosophical problems. This is to make a computer simulation of natural selection in which intelligence evolves by mutating computer programs in a suitably demanding environment. This method has had no substantial success so far, perhaps due to inadequate models of the world and of the evolutionary process, but it might succeed. It would seem to be a dangerous procedure, for a program that was intelligent in a way its designer did not understand might get out of control. In any case, the present authors find it more congenial to try to make an artificial intelligence through an understanding of what intelligence is, and that approach seems likely to succeed sooner.
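
For definiteness, here is a minimal sketch of the kind of simulation meant. It mutates and selects mere parameter vectors rather than program text, and the 'demanding environment' is an invented target, so it only gestures at the method.

    import random

    TARGET = [3, 1, 4, 1, 5]   # invented stand-in for a demanding environment

    def fitness(genome):
        # Higher is better: negative distance from the target behaviour.
        return -sum(abs(g - t) for g, t in zip(genome, TARGET))

    def mutate(genome):
        # Change one gene by +1 or -1.
        g = list(genome)
        i = random.randrange(len(g))
        g[i] += random.choice([-1, 1])
        return g

    population = [[0] * len(TARGET) for _ in range(20)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                    # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]  # mutation
    print(population[0], fitness(population[0]))

A serious attempt would have to mutate actual programs and supply a far richer environment, which is where the inadequate models mentioned above come in.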

