To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of a machine in a particular situation may require ascribing mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is very incompletely known.
While we will be quite liberal in ascribing some mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any particular quality.
These views are motivated by work in artificial intelligence (abbreviated AI). They can be taken as asserting that many of the philosophical problems of mind take a concrete form when one takes seriously the idea of making machines behave intelligently. In particular, AI raises for machines two issues that have heretofore been considered only in connection with people.
First, in designing intelligent programs and looking at them from the outside we need to determine the conditions under which specific mental and volitional terms are applicable. We can exemplify these problems by asking when it might be legitimate to say about a machine, ``It knows I want a reservation to Boston, and it can give it to me, but it won't''.
Second, when we want a generally intelligent computer program, we must build into it a general view of what the world is like with especial attention to facts about how the information required to solve problems is to be obtained and used. Thus we must provide it with some kind of metaphysics (general world-view) and epistemology (theory of knowledge), however naive.
As much as possible, we will ascribe mental qualities separately from each other instead of bundling them in a concept of mind. This is necessary, because present machines have rather varied little minds; the mental qualities that can legitimately be ascribed to them are few and differ from machine to machine. We will not even try to meet objections like, ``Unless it also does X, it is illegitimate to speak of its having mental qualities.''
Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs, although such beliefs will be needed by computer programs that reason about what knowledge they lack and where to get it. Mental qualities peculiar to human-like motivational structures, such as love and hate, will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures. Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model.
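The thermostat case can be made concrete with a toy program. The following sketch (the names and thresholds are illustrative assumptions, not part of any formalism proposed here) shows the sense in which a thermostat's internal state can be described as a belief that the room is too cold, too hot, or OK, and its behavior as acting on that belief:

```python
class Thermostat:
    """A toy thermostat whose state admits a belief ascription.
    Names and thresholds are hypothetical, for illustration only."""

    def __init__(self, set_point: float, tolerance: float = 1.0):
        self.set_point = set_point
        self.tolerance = tolerance

    def belief(self, temperature: float) -> str:
        # The thermostat's only distinctions about the world:
        # it can "believe" exactly one of three things.
        if temperature < self.set_point - self.tolerance:
            return "the room is too cold"
        if temperature > self.set_point + self.tolerance:
            return "the room is too hot"
        return "the room is OK"

    def action(self, temperature: float) -> str:
        # The thermostat acts on its belief, as if it "wanted"
        # the room to be OK.
        b = self.belief(temperature)
        if b == "the room is too cold":
            return "turn heat on"
        if b == "the room is too hot":
            return "turn heat off"
        return "do nothing"


t = Thermostat(set_point=20.0)
print(t.belief(15.0))   # the room is too cold
print(t.action(15.0))   # turn heat on
```

The point of the ascription is economy of description: saying the thermostat "believes the room is too cold" summarizes its state and predicts its behavior without reference to bimetallic strips or sensor readings. Note also what the sketch lacks: the program has no representation of its own `belief` method, so it has no beliefs about beliefs, matching the paragraph's observation about machines so far constructed.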
The successive sections of this paper will give philosophical and AI reasons for ascribing beliefs to machines, two new forms of definition that seem necessary for defining mental qualities and examples of their use, examples of systems to which mental qualities are ascribed, some first attempts at defining a variety of mental qualities, some comments on other views on mental qualities, notes, and references.
This paper is exploratory and its presentation is non-technical. Any axioms that are presented are illustrative and not part of an axiomatic system proposed as a serious candidate for AI or philosophical use. This is regrettable for two reasons. First, AI use of these concepts requires formal axiomatization. Second, the lack of formalism focusses attention on whether the paper correctly characterizes mental qualities rather than on the formal properties of the theories proposed. I think we can attain a situation like that in the foundations of mathematics, wherein the controversies about whether to take an intuitionist or classical point of view have been mainly replaced by technical studies of intuitionist and classical theories and the relations between them. In future work, I hope to treat these matters more formally along the lines of (McCarthy 1977) and (1979). This won't eliminate controversy about the true nature of mental qualities, but I believe that their eventual resolution requires more technical knowledge than is now available.