
GLOSSARY OF MENTAL QUALITIES

In this section we give, for machines, short ``definitions'' of a collection of mental qualities. We include a number of terms that give us difficulty, with an indication of what the difficulties seem to be. We emphasize the place of these concepts in the design of intelligent robots.

5.1. Introspection and Self-Knowledge

We say that a machine introspects when it comes to have beliefs about its own mental state. A simple form of introspection takes place when a program determines whether it has certain information and if not asks for it. Often an operating system will compute a checksum of itself every few minutes to verify that it hasn't been changed by a software or hardware malfunction.
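
As a small illustration of this simple kind of introspection, consider the following sketch (in Python; the stored facts and the particular use of a checksum of the program's own source are invented purely for illustration). It checks whether it already has a piece of information before asking for it, and recomputes a checksum of its own text in the manner of the operating system example.

    import hashlib

    # A minimal sketch of the simple introspection described above. The
    # knowledge-base contents are invented for illustration.
    knowledge = {"battery_level": 0.87}          # facts the program already has

    def lookup_or_ask(fact_name):
        """Check whether we have the information; if not, ask for it."""
        if fact_name in knowledge:               # a belief about our own state
            return knowledge[fact_name]
        value = input("Please supply " + fact_name + ": ")
        knowledge[fact_name] = value
        return value

    def self_checksum(path):
        """Recompute a checksum of our own source text, as an operating
        system might, to detect accidental modification."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    if __name__ == "__main__":
        print(lookup_or_ask("battery_level"))    # already known
        print(lookup_or_ask("destination"))      # not known, so the program asks
        print(self_checksum(__file__))           # introspecting on our own text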

In principle, introspection is easier for computer programs than for people, because the entire memory in which programs and data are stored is available for inspection. In fact, a computer program can be made to predict how it would react to particular inputs provided it has enough free storage to perform the calculation. This situation smells of paradox, and there is one. Namely, if a program could predict its own actions in less time than it takes to carry out the action, it could refuse to do what it has predicted for itself. This only shows that self-simulation is necessarily a slow process, and this is not surprising.
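
The argument can be made concrete as a diagonalization. In the sketch below (Python; the predictor is a hypothetical parameter, not an existing facility), a program that could consult a fast and exact predictor of its own behaviour simply does the opposite of whatever is predicted, so any such predictor must be wrong about it; prediction by full self-simulation escapes the difficulty only because it is at least as slow as the action itself.

    # A sketch of the paradox of fast self-prediction. `predict` stands for a
    # hypothetical oracle claimed to return, quickly and correctly, the action
    # this very program will take.

    def contrarian(predict):
        forecast = predict(contrarian)            # ask the oracle what we will do
        # Whatever the oracle says, do the opposite, so the oracle was wrong.
        return "refrain" if forecast == "act" else "act"

    # Any concrete predictor we supply is mistaken on this input:
    print(contrarian(lambda program: "act"))      # prints "refrain"
    print(contrarian(lambda program: "refrain"))  # prints "act"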

However, present programs do little interesting introspection. This is just a matter of the undeveloped state of artificial intelligence; programmers don't yet know how to make a computer program look at itself in a useful way.

5.2. Consciousness and Self-Consciousness

Suppose we wish to distinguish the self-awareness of a machine, animal or person from its awareness of other things. We explicate awareness as belief in certain sentences, so in this case we want to distinguish those sentences, or those terms in the sentences, that may be considered to be about the self. We also don't expect that self-consciousness will be a single property that something either has or hasn't, but rather that there will be many kinds of self-awareness, with humans possessing many of the kinds we can imagine.

Here are some of the kinds of self-awareness:

5.2.1. Certain predicates of the situation (propositional fluents in the terminology of (McCarthy and Hayes 1969)) are directly observable in almost all situations while others often must be inferred. The almost always observable fluents may reasonably be identified with the senses. Likewise the values of certain fluents are almost always under the control of the being and can be called motor parameters for lack of a common language term. We have in mind the positions of the joints. Most motor parameters are both observable and controllable. I am inclined to regard the possession of a substantial set of such constantly observable or controllable fluents as the most primitive form of self-consciousness, but I have no strong arguments against someone who wished to require more.

5.2.2. The second level of self-consciousness requires a term I in the language denoting the self. I should belong to the class of persistent objects and some of the same predicates should be applicable to it as are applicable to other objects. For example, like other objects I has a location that can change in time. I is also visible and impenetrable like other objects. However, we don't want to get carried away in regarding a physical body as a necessary condition for self-consciousness. Imagine a distributed computer whose sense and motor organs could also be in a variety of places. We don't want to exclude it from self-consciousness by definition.

5.2.3. The third level comes when I is regarded as an actor among others. The conditions that permit I to do something are similar to the conditions that permit other actors to do similar things.

5.2.4. The fourth level requires the applicability of predicates such as believes, wants and can to I. Beliefs about past situations and the ability to hypothesize future situations are also required for this level.

5.3. Language and Thought

Here is a hypothesis arising from artificial intelligence concerning the relation between language and thought. Imagine a person or machine that represents information internally in a huge network. Each node of the network has references to other nodes through relations. (If the system has a variable collection of relations, then the relations have to be represented by nodes, and we get a symmetrical theory if we suppose that each node is connected to a set of pairs of other nodes). We can imagine this structure to have a long term part and also extremely temporary parts representing current thoughts. Naturally, each being has its own network depending on its own experience. A thought is then a temporary node currently being referenced by the mechanism of consciousness. Its meaning is determined by its references to other nodes which in turn refer to yet other nodes. Now consider the problem of communicating a thought to another being.
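
A toy version of such a network is sketched below (Python; the particular nodes and relations are invented for illustration). Relations are themselves represented by nodes, each node carries a set of (relation, target) pairs, and a current thought is a temporary node referring into the long-term store.

    # A toy rendering of the network described above; all names are illustrative.

    class Node:
        def __init__(self, label):
            self.label = label
            self.links = set()        # set of (relation node, target node) pairs

        def connect(self, relation, target):
            self.links.add((relation, target))

    # Relations are themselves nodes, as suggested for a variable collection
    # of relations.
    is_a = Node("is-a")
    color_of = Node("color-of")

    # Long-term part of the network.
    long_term = {name: Node(name) for name in ("canary", "bird", "yellow")}
    long_term["canary"].connect(is_a, long_term["bird"])
    long_term["canary"].connect(color_of, long_term["yellow"])

    # A current thought is a temporary node referring into long-term memory;
    # its meaning is whatever can be reached from it through the links.
    thought = Node("that canary over there")
    thought.connect(is_a, long_term["canary"])
    for relation, target in thought.links:
        print(relation.label, target.label)       # prints: is-a canary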

Full communication of a thought would involve transmitting the entire network that can be reached from the given node, and this would ordinarily constitute the entire experience of the being. More than that, it would be necessary also to communicate the programs that take action on the basis of encountering certain nodes. Even if all this could be transmitted, the recipient would still have to find equivalents for the information in terms of its own network. Therefore, thoughts have to be translated into a public language before they can be communicated.

A language is also a network of associations and programs. However, certain of the nodes in this network (more accurately a family of networks, since no two people speak precisely the same language) are associated with words or set phrases. Sometimes the translation from thoughts to sentences is easy, because large parts of the private networks are taken from the public network, and there is an advantage in preserving the correspondence. However, the translation is always approximate (in a sense that still lacks a technical definition), and some areas of experience are difficult to translate at all. Sometimes this is for intrinsic reasons, and sometimes because particular cultures don't use language in this area. (It is my impression that cultures differ in the extent to which information about facial appearance that can be used for recognition is verbally transmitted). According to this scheme, the ``deep structure'' of a publicly expressible thought is a node in the public network. It is translated into the deep structure of a sentence as a tree whose terminal nodes are the nodes to which words or set phrases are attached. This ``deep structure'' then must be translated into a string in a spoken or written language.

The need to use language to express thought also applies when we have to ascribe thoughts to other beings, since we cannot put the entire network into a single sentence.

5.4. Intentions

We are tempted to say that a machine intends to perform an action when it believes it will and also believes that it could do otherwise. However, we will resist this temptation and propose that a predicate intends(actor,action,state) be suitably axiomatized, where one of the axioms says that the machine intends the action if it believes it will perform the action and could do otherwise. Armstrong (1968) wants to require an element of servo-mechanism in order that a belief that an action will be performed be regarded as an intention, i.e., there should be a commitment to do it one way or another. There may be good reasons to allow several versions of intention to co-exist in the same formalism.
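
A minimal sketch of such an axiom, with the auxiliary predicates willdo and couldrefrain invented here purely for illustration, might read

\[
believes(actor, willdo(actor, action), state) \wedge
believes(actor, couldrefrain(actor, action), state)
\supset intends(actor, action, state).
\]

Further axioms could then constrain willdo and couldrefrain, or add Armstrong's servo-mechanism requirement, without disturbing this one.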

5.5. Free Will

When we program a computer to make choices intelligently after determining its options, examining their consequences, and deciding which is most favorable or most moral or whatever, we must program it to take an attitude towards its freedom of choice essentially isomorphic to that which a human must take to his own. A program will have to take such an attitude towards another unless it knows the details of the other's construction and present state.
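
The choice procedure itself can be sketched briefly (Python; the options, consequences and values are invented for illustration): the program determines its options, examines their consequences, and takes the most favourable one, and from inside the procedure nothing but that comparison settles which option is taken.

    # A sketch of the choice procedure described above. The particular options,
    # consequences and values are invented for illustration.

    def choose(options, consequence_of, value_of):
        # From inside this procedure the choice is open: nothing but the
        # comparison below settles which option gets taken.
        return max(options, key=lambda o: value_of(consequence_of(o)))

    options = ["go left", "go right"]
    consequence_of = {"go left": "reach charger", "go right": "hit wall"}.get
    value_of = {"reach charger": 1.0, "hit wall": -1.0}.get
    print(choose(options, consequence_of, value_of))    # prints: go left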

We can define whether a particular action was free or forced relative to a theory that ascribes beliefs and within which beings do what they believe will advance their goals. In such a theory, action is precipitated by a belief of the form ``I should do X now''. We will say that the action was free if changing the belief to ``I shouldn't do X now'' would have resulted in the action not being performed. This requires that the theory of belief have sufficient Cartesian product structure so that changing a single belief is defined, but it doesn't require defining what the state of the world would be if a single belief were different.
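
The definition can be sketched directly (Python; the belief coordinates and the decision procedure are invented for illustration). Beliefs form a product of independent coordinates, so changing the single belief ``I should do X now'' is well defined, and the action counts as free exactly when that change would have resulted in its not being performed.

    # A sketch of the counterfactual test for a free action. The belief
    # coordinates and the decision procedure are invented for illustration.

    def decide(beliefs):
        # The theory's prediction: the being does whatever it believes it
        # should do now.
        return {act for act, should in beliefs.items() if should}

    def was_free(action, beliefs):
        # The action was free if changing the single belief "I should do X now"
        # to "I shouldn't do X now" would have resulted in its not being done.
        flipped = dict(beliefs)
        flipped[action] = False               # change exactly one coordinate
        return action in decide(beliefs) and action not in decide(flipped)

    beliefs = {"open the door": True, "stay put": False}
    print(was_free("open the door", beliefs))  # True: that belief made the difference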

It may be possible to separate the notion of a free action into a technical part and a controversial part. The technical part would define freedom relative to an approximate co-ordinate system giving the necessary Cartesian product structure. Relative to the co-ordinate system, the freedom of a particular action would be a technical issue, but people could argue about whether to accept the whole co-ordinate system.

This isn't the whole free will story, because moralists are also concerned with whether praise or blame may be attributed to a choice. The following considerations would seem to apply to any attempt to define the morality of actions in a way that would apply to machines:

5.5.1. There is unlikely to be a simple behavioral definition. Instead there would be a second order definition criticizing predicates that ascribe morality to actions.

5.5.2. The theory must contain at least one axiom of morality that is not just a statement of physical fact. Relative to this axiom, moral judgments of actions can be factual.

5.5.3. The theory of morality will presuppose a theory of belief in which statements of the form ``It believed the action would harm someone'' are defined. The theory must ascribe beliefs about others' welfare and perhaps about the being's own welfare.

5.5.4. It might be necessary to consider the machine as imbedded in some kind of society in order to ascribe morality to its actions.

5.5.5. No present machines admit such a belief structure, and no such structure may be required to make a machine with arbitrarily high intelligence in the sense of problem-solving ability.

5.5.6. It seems unlikely that morally judgeable machines or machines to which rights might legitimately be ascribed should be made if and when it becomes possible to do so.

5.6. Understanding

It seems to me that understanding the concept of understanding is fundamental and difficult. The first difficulty lies in determining what the operand is. What is the ``theory of relativity'' in ``Pat understands the theory of relativity''? What does ``misunderstand'' mean? It seems that understanding should involve knowing a certain collection of facts including the general laws that permit deducing the answers to questions. We probably want to separate understanding from issues of cleverness and creativity.

5.7. Creativity

This may be easier than ``understanding'' at least if we confine our attention to reasoning processes. Many problem solutions involve the introduction of entities not present in the statement of the problem. For example, proving that an 8 by 8 square board with two diagonally opposite squares removed cannot be covered by dominos each covering two adjacent squares involves introducing the colors of the squares and the fact that a domino covers two squares of opposite color. We want to regard this as a creative proof even though it might be quite easy for an experienced combinatorist.
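
The colour-counting step of the proof can even be checked mechanically, as in the short sketch below (Python): removing two diagonally opposite corners leaves 30 squares of one colour and 32 of the other, while every domino covers one square of each colour, so no tiling by 31 dominoes exists.

    # Colour-count argument for the mutilated 8 by 8 board.
    squares = {(r, c) for r in range(8) for c in range(8)}
    squares -= {(0, 0), (7, 7)}                  # remove two opposite corners

    black = sum((r + c) % 2 == 0 for r, c in squares)
    white = sum((r + c) % 2 == 1 for r, c in squares)
    print(black, white)   # 30 32: unequal, yet each domino covers one of each
                          # colour, so 31 dominoes cannot cover the 62 squares.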


