(McCarthy and Hayes 1969) defines an epistemologically adequate representation of information as one that can express the information actually available to a subject under given circumstances. Thus when we see a person, parts of him are occluded, and we use our memory of previous looks at him and our general knowledge of humans to finish off a ``picture'' of him that includes both two- and three-dimensional information. We must also consider metaphysically adequate representations that can represent complete facts, ignoring the subject's ability to acquire the facts in given circumstances. Thus Laplace thought that the positions and velocities of the particles in the universe gave a metaphysically adequate representation. Metaphysically adequate representations are needed for scientific and other theories, but artificial intelligence and a full philosophical treatment of common sense experience also require epistemologically adequate representations. This paper might be summarized as contending that mental concepts are needed for an epistemologically adequate representation of facts about machines, especially future intelligent machines.

Work in artificial intelligence is still far from showing how to reach human-level intellectual performance. Our approach to the AI problem involves identifying the intellectual mechanisms required for problem solving and describing them precisely. Therefore we are at the end of the philosophical spectrum that requires everything to be formalized in mathematical logic. It is sometimes said that one studies philosophy in order to advance beyond one's untutored naive world-view, but unfortunately for artificial intelligence, no one has yet been able to give a description of even a naive world-view, complete and precise enough to allow a knowledge-seeking program to be constructed in accordance with its tenets.
Present AI programs operate in limited domains, e.g. they play particular games, prove theorems in a particular logical system, or understand natural language sentences covering a particular subject matter and satisfying other semantic restrictions. General intelligence will require general models of situations changing in time, actors with goals and strategies for achieving them, and knowledge about how information can be obtained.

Our opinion is that human intellectual structure is substantially determined by the intellectual problems humans face. Thus a Martian or a machine will need similar structures to solve similar problems. Dennett (1971) expresses similar views. On the other hand, the human motivational structure seems to have many accidental features that might not be found in Martians and that we would not be inclined to program into machines. This is not the place to present arguments for this viewpoint. 

Behavioral definitions are often favored in philosophy. A system is defined to have a certain quality if it behaves in a certain way or is disposed to behave in a certain way. Their virtue is conservatism; they don't postulate internal states that are unobservable to present science and may remain unobservable. However, such definitions are awkward for mental qualities, because, as common sense suggests, a mental quality may not result in behavior when another mental quality prevents it; e.g. I may think you are thick-headed, but politeness may prevent my saying so. Particular difficulties of this kind can be overcome, but an impression of vagueness remains. The liking for behavioral definitions stems from caution, but I would interpret scientific experience as showing that boldness in postulating complex structures of unobserved entities (provided it is accompanied by a willingness to take back mistakes) is more likely to be rewarded by understanding of and control over nature than is positivistic timidity. It is particularly instructive to imagine a determined behaviorist trying to figure out an electronic computer. Trying to define each quality behaviorally would get him nowhere; only simultaneously postulating a complex structure including memory, arithmetic unit, control structure, and input-output would yield predictions that could be compared with experiment.

There is a sense in which operational definitions are not taken seriously even by their proposers. Suppose someone gives an operational definition of length (e.g. involving a certain platinum bar), and a whole school of physicists and philosophers becomes quite attached to it. A few years later, someone else criticizes the definition as lacking some desirable property, proposes a change, and the change is accepted. This is normal, but if the original definition expressed what they really meant by the length, they would refuse to change, arguing that the new concept may have its uses, but it isn't what they mean by ``length''. This shows that the concept of ``length'' as a property of objects is more stable than any operational definition.

Carnap has an interesting section in Meaning and Necessity entitled ``The Concept of Intension for a Robot'' in which he makes a similar point saying, ``It is clear that the method of structural analysis, if applicable, is more powerful than the behavioristic method, because it can supply a general answer, and, under favorable circumstances, even a complete answer to the question of the intension of a given predicate.''

The clincher for AI, however, is an ``argument from design''. In order to produce desired behavior in a computer program, we build certain mental qualities into its structure. This doesn't lead to behavioral characterizations of the qualities, because the particular qualities are only one of many ways we might use to get the desired behavior, and anyway the desired behavior is not always realized.

Putnam (1970) also proposes what amounts to second order definitions for psychological properties.

Whether a system has beliefs and other mental qualities is not primarily a matter of the complexity of the system. Although cars are more complex than thermostats, it is hard to ascribe beliefs or goals to them, and the same is perhaps true of the basic hardware of a computer, i.e. the part of the computer that executes the program, considered apart from the program itself.

1999 footnote: Beliefs about the room being too hot, etc. are ascribed to the thermostat once it is installed in its location and connected appropriately. The thermostat on the shelf in the hardware store has no beliefs yet and might be used in such a way that it would have quite different beliefs.
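
To make this concrete, here is a minimal sketch, not from the paper and with invented names and thresholds, of a thermostat whose three control states naturally take the belief labels ``too cold'', ``too hot'', and ``ok''. The ascriptions apply only once read_temperature and switch_furnace actually connect the device to a room and a furnace.

# Illustrative sketch only (not from the paper); the names, thresholds,
# and interface are invented for this example.

class Thermostat:
    def __init__(self, setpoint, read_temperature, switch_furnace):
        # read_temperature and switch_furnace stand for the physical
        # connections; on the hardware-store shelf they are missing,
        # and the belief ascriptions do not yet apply.
        self.setpoint = setpoint
        self.read_temperature = read_temperature
        self.switch_furnace = switch_furnace

    def step(self):
        # One control cycle; returns the belief we ascribe to the state.
        t = self.read_temperature()
        if t < self.setpoint - 1.0:
            self.switch_furnace(True)     # ascribed belief: room is too cold
            return "too cold"
        if t > self.setpoint + 1.0:
            self.switch_furnace(False)    # ascribed belief: room is too hot
            return "too hot"
        return "ok"                       # ascribed belief: temperature is ok

Wired into a different room, or to an air conditioner instead of a furnace, the same device would be ascribed quite different beliefs, which is the point of the footnote.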
My house at the time the paper was first written.
1999: Tom Costello pointed out to me that a simple system can sometimes be ascribed some introspective knowledge. Namely, an electronic alarm clock getting power after being without power can be said to know that it doesn't know the time. It asks to be reset by blinking its display. The usual alarm clock can be understood just as well by the design stance as by the intentional stance. However, we can imagine an alarm clock that had an interesting strategy for getting the time after the end of a power failure. In that case, the ascription of knowledge of non-knowledge might be the best way of understanding that part of the state.
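
A minimal sketch of Costello's example, with invented names, may help: the clock's stored time is either a valid value or a marker for ``no valid time'', and blinking is precisely the behavior that expresses the second state.

# Illustrative sketch only: an alarm clock whose blinking display after
# a power failure can be read as knowing that it doesn't know the time.
# All names are invented for this example.

class AlarmClock:
    def __init__(self):
        self.time = None          # None marks "no valid time stored"

    def on_power_restored(self):
        self.time = None          # whatever was stored was lost with power

    def set_time(self, hhmm):
        self.time = hhmm          # a person resets the clock

    def display(self):
        if self.time is None:
            # Blinking asks to be reset: the clock "knows that it
            # doesn't know the time".
            return "12:00 (blinking)"
        return self.time

For this simple program the design stance explains everything; the ascription of knowledge of non-knowledge earns its keep only when the clock has some strategy for getting the time, e.g. consulting a radio time signal, that is most easily described in terms of what it knows it lacks.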
2001: The United Airlines flight information system said to me, ``For what city do you want arrival information?'' I said, ``San Francisco'', to which it replied, ``I think you said San Francisco. If that is correct, say yes.'' People with qualms about machines saying ``I'' or ``I think'' are invited to suggest what the flight information system should have said.
Our own ability to derive the laws of higher levels of organization from knowledge of lower level laws is also limited by universality. While the presently accepted laws of physics allow only one chemistry, the laws of physics and chemistry allow many biologies, and, because the neuron is a universal computing element, an arbitrary mental structure is allowed by basic neurophysiology. Therefore, to determine human mental structure, one must make psychological experiments, or determine the actual anatomical structure of the brain and the information stored in it. One cannot determine the structure of the brain merely from the fact that the brain is capable of certain problem solving performance. In this respect, our position is similar to that of the Life robot.

