The fundamental difference in point of view between this paper and most philosophy is that we are motivated by the problem of designing an artificial intelligence. Therefore, our attitude towards a concept like belief is determined by trying to decide what ways of acquiring and using beliefs will lead to intelligent behavior. Then we discover that much that one intelligence can find out about another can be expressed by ascribing beliefs to it.
A negative view of empiricism seems dictated by the apparent artificiality of designing an empiricist computer program to operate in the real world. Namely, we plan to provide our program with certain senses, but we have no way of being sure that the world in which we are putting the machine is constructable from the sense impressions it will have. Whether it will ever know some fact about the world is contingent, so we are not inclined to build into it the notion that what it can't know about doesn't exist.
The philosophical views most sympathetic to our approach are some expressed by Carnap in some of the discursive sections of (Carnap 1956).
Hilary Putnam (1961) argues that the classical mind-body problems are just as acute for machines as for men. Some of his arguments are more explicit than any given here, but in that paper, he doesn't try to solve the problems for machines.
D.M. Armstrong (1968) ``attempts to show that there are no valid philosophical or logical reasons for rejecting the identification of mind and brain.'' He does this by proposing definitions of mental concepts in terms of the state of the brain. Fundamentally, I agree with him and think that such a program of definition can be carried out, but it seems to me that his methods for defining mental qualities as brain states are too weak even for defining properties of computer programs. While he goes beyond behavioral definitions as such, he relies on dispositional states.
This paper is partly an attempt to do what Ryle (1949) says can't be done and shouldn't be attempted--namely to define mental qualities in terms of states of a machine. The attempt is based on methods of which he would not approve; he implicitly requires first order definitions, and he implicitly requires that definitions be made in terms of the state of the world and not in terms of approximate theories.
His final view of the proper subject matter of epistemology is too narrow to help researchers in artificial intelligence. Namely, we need help in expressing those facts about the world that can be obtained in an ordinary situation by an ordinary person. Such general facts about the world are what will enable our program to decide to call a travel agent to find out how to get to Boston.
Donald Davidson (1973) undertakes to show, ``There is no important sense in which psychology can be reduced to the physical sciences''. He proceeds by arguing that the mental qualities of a hypothetical artificial man could not be defined physically even if we knew the details of its physical structure.
One sense of Davidson's statement does not require the arguments he gives. There are many universal computing elements--relays, neurons, gates, and flip-flops--and physics tells us many ways of constructing them. Any information processing system that can be constructed of one kind of element can be constructed of any other. Therefore, physics tells us nothing about what information processes exist in nature or can be constructed. Computer science is no more reducible to physics than is psychology.
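The universality claim above can be illustrated concretely. The following sketch is mine, not the paper's: every Boolean function is expressible using NAND alone, so a system built from relays or neurons can in principle be rebuilt from any other universal element.

```python
# Illustrative sketch (not from the paper): NAND is a universal element,
# so the other Boolean gates can be constructed from it alone.

def nand(a, b):
    """The single primitive element."""
    return not (a and b)

# Every other gate built solely from NAND.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

The same constructions could be repeated with NOR, relays, or idealized neurons as the primitive; the information process realized is identical, which is the sense in which physics does not determine it.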
However, Davidson also argues that the mental states of an organism are not describable in terms of its physical structure, and I take this to assert also that they are not describable in terms of its construction from logical elements. I would take his arguments as showing that mental qualities don't have what I have called first order structural definitions. I don't think they apply to second order definitions.
D.C. Dennett (1971) expresses views very similar to mine about the reasons for ascribing mental qualities to machines. However, the present paper emphasizes criteria for ascribing particular mental qualities to particular machines rather than the general proposition that mental qualities may be ascribed. I think that the chess programs Dennett discusses have more limited mental structures than he seems to ascribe to them. Thus their beliefs almost always concern particular positions; they believe almost no general propositions about chess, and this accounts for many of their weaknesses. Intuitively, this is well understood by researchers in computer game playing, and providing the program with a way of representing general facts about chess and even general facts about particular positions is a major unsolved problem. For example, no present program can represent the assertion ``Black has a backward pawn on his Q3 and white may be able to cramp black's position by putting pressure on it''. Such a representation would require rules that permit such a statement to be derived in appropriate positions and would guide the examination of possible moves in accordance with it.
I would also distinguish between believing the laws of logic and merely using them (see Dennett, p. 95). The former requires a language that can express sentences about sentences and which contains some kind of reflection principle. Many present problem solving programs can use modus ponens but cannot reason about their own ability to use new facts in a way that corresponds to believing modus ponens.
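The distinction between using modus ponens and believing it can be made concrete with a toy forward-chaining program (a hypothetical sketch of mine, not any program the paper cites): the program applies the rule at every step, yet the rule is wired into its control structure rather than represented as a sentence, so nothing in its fact base could support reasoning about its own inferential ability.

```python
# A toy forward chainer that *uses* modus ponens without *believing* it:
# the inference rule lives in the control structure, not among the
# sentences the program manipulates.  (Illustrative sketch only.)

def forward_chain(facts, implications):
    """facts: a set of atomic sentences.
    implications: a list of (antecedent, consequent) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in facts and q not in facts:  # modus ponens, hard-wired
                facts.add(q)
                changed = True
    return facts

# Example: from "rains" and the implications below, "wet" and then
# "slippery" are derived, but no sentence *about* modus ponens ever
# appears in the fact base.
derived = forward_chain({"rains"}, [("rains", "wet"), ("wet", "slippery")])
```

Believing modus ponens, by contrast, would require the program's language to contain sentences about its own sentences--a reflection principle--so that the rule itself could be an object of inference.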