
INTRODUCTION

In (McCarthy and Hayes 1969), we proposed dividing the artificial intelligence problem into two parts--an epistemological part and a heuristic part. This lecture further explains this division, explains some of the epistemological problems, and presents some new results and approaches.

The epistemological part of AI studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts. It leaves aside the heuristic problems of how to search spaces of possibilities and how to match patterns.

Considering epistemological problems separately has the following advantages:

1. The same problems of what information is available to an observer and what conclusions can be drawn from information arise in connection with a variety of problem solving tasks.

2. A single solution of the epistemological problems can support a wide variety of heuristic approaches to a problem.

3. AI is a very difficult scientific problem, so there are great advantages in finding parts of the problem that can be separated out and separately attacked.

4. As the reader will see from the examples in the next section, it is quite difficult to formalize the facts of common knowledge. Existing programs that manipulate facts in some of the domains are confined to special cases and don't face the difficulties that must be overcome to achieve very intelligent behavior.

We have found first order logic to provide suitable languages for expressing facts about the world for epistemological research. Recently we have found that introducing concepts as individuals makes possible a first order logic expression of facts usually expressed in modal logic but with important advantages over modal logic--and so far no disadvantages.

In AI literature, the term predicate calculus is usually extended to cover the whole of first order logic. While predicate calculus includes just formulas built up from variables using predicate symbols, logical connectives, and quantifiers, first order logic also allows the use of function symbols to form terms and in its semantics interprets the equality symbol as standing for identity. Our first order systems further use conditional expressions (nonrecursive) to form terms and λ-expressions with individual variables to form new function symbols. All these extensions are logically inessential, because every formula that includes them can be replaced by a formula of pure predicate calculus whose validity is equivalent to it. The extensions are heuristically nontrivial, because the equivalent predicate calculus may be much longer and is usually much more difficult to understand--for man or machine.
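The eliminability claim can be made concrete for conditional terms. Assuming a unary predicate P, a condition q, and terms a and b (names chosen here purely for illustration), a standard elimination is:

```latex
P(\mathbf{if}\ q\ \mathbf{then}\ a\ \mathbf{else}\ b)
  \;\equiv\;
  \bigl(q \wedge P(a)\bigr) \vee \bigl(\neg q \wedge P(b)\bigr)
```

The right-hand side is pure predicate calculus, but it duplicates the context P once per branch; nested conditionals multiply this blowup, which is what makes the extension logically inessential yet heuristically nontrivial.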

The use of first order logic in epistemological research is a separate issue from whether first order sentences are appropriate data structures for representing information within a program. As to the latter, sentences in logic are at one end of a spectrum of representations; they are easy to communicate, have logical consequences and can be logical consequences, and they can be meaningful in a wide context. Taking action on the basis of information stored as sentences is slow, however, and sentences are not the most compact representation of information. The opposite extreme is to build the information into hardware; next comes building it into a machine language program, then a language like LISP, then a language like MICROPLANNER, and then perhaps productions. Compiling or hardware building or ``automatic programming'' or just planning takes information from a more context independent form to a faster but more context dependent form. A clear expression of this is the transition from first order logic to MICROPLANNER, where much information is represented similarly but with a specification of how the information is to be used. A large AI system should represent some information as first order logic sentences and other information should be compiled. In fact, it will often be necessary to represent the same information in several ways. Thus a ball-player's habit of keeping his eye on the ball is built into his ``program'', but it is also explicitly represented as a sentence so that the advice can be communicated.

Whether first order logic makes a good programming language is yet another issue. So far it seems to have the qualities Samuel Johnson ascribed to a woman preaching or a dog walking on its hind legs--one is sufficiently impressed by seeing it done at all that one doesn't demand it be done well.

Suppose we have a theory of a certain class of phenomena axiomatized in (say) first order logic. We regard the theory as adequate for describing the epistemological aspects of a goal seeking process involving these phenomena provided the following criterion is satisfied:

Imagine a robot such that its inputs become sentences of the theory stored in the robot's database, and such that whenever a sentence of the form ``I should emit output X now'' appears in its database, the robot emits output X. Suppose that new sentences appear in its database only as logical consequences of sentences already in the database. The deduction of these sentences also uses general sentences stored in the database at the beginning constituting the theory being tested. Usually a database of sentences permits many different deductions to be made, so that a deduction program would have to choose which deduction to make. If there were no program that could achieve the goal by making deductions allowed by the theory no matter how fast the program ran, we would have to say that the theory was epistemologically inadequate. A theory that was epistemologically adequate would be considered heuristically inadequate if no program running at a reasonable speed with any representation of the facts expressed by the data could do the job. We believe that most present AI formalisms are epistemologically inadequate for general intelligence; i.e. they wouldn't achieve enough goals requiring general intelligence no matter how fast they were allowed to run. This is because the epistemological problems discussed in the following sections haven't even been attacked yet.
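The thought experiment above can be sketched as a toy loop. Everything in this sketch is a hypothetical stand-in, not a formalism from the text: sentences are encoded as tuples, a ``rule'' is any function from a database to the set of consequences it licenses, and the goal and action names are invented for illustration. The loop deduces to closure (or a step limit) and then reports the emission sentences that appeared.

```python
def run_robot(database, rules, max_steps=100):
    """Deduce to closure (or a step limit), then collect emission sentences.

    Sentences are tuples; a rule maps the current database to the set of
    sentences it allows to be deduced. New sentences enter the database
    only as consequences of sentences already there, as in the criterion.
    """
    db = set(database)
    for _ in range(max_steps):
        new = set()
        for rule in rules:
            new |= rule(db) - db   # only consequences of the current db enter
        if not new:                # no deduction applies: closure reached
            break
        db |= new
    # Sentences of the form ("should_emit", X) trigger the output X.
    return [s[1] for s in db if s[0] == "should_emit"]


# Toy theory (hypothetical): from ("goal", g) and ("achieves", a, g),
# deduce that the robot should emit the action a.
def achieves_rule(db):
    derived = set()
    for s in db:
        if s[0] == "goal":
            for t in db:
                if t[0] == "achieves" and t[2] == s[1]:
                    derived.add(("should_emit", t[1]))
    return derived


facts = {("goal", "open_door"), ("achieves", "turn_handle", "open_door")}
print(run_robot(facts, [achieves_rule]))  # prints ['turn_handle']
```

On this reading, a theory is epistemologically inadequate when no choice of rules expressible in it yields the goal's emission sentence at all, however many steps the loop is allowed; heuristic inadequacy is the separate question of whether any program can reach it at a reasonable speed.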

The word ``epistemology'' is used in this paper substantially as many philosophers use it, but the problems considered have a different emphasis. Philosophers emphasize what is potentially knowable with maximal opportunities to observe and compute, whereas AI must take into account what is knowable with available observational and computational facilities. Even so, many of the same formalizations have both philosophical and AI interest.

The subsequent sections of this paper list some epistemological problems, discuss some first order formalizations, introduce concepts as objects and use them to express facts about knowledge, describe a new mode of reasoning called circumscription, and place the AI problem in a philosophical setting.



John McCarthy
Wed May 15 14:19:09 PDT 1996