Next: Common Sense in Lenat's Up: Book Review Previous: Progress in Logic-Based AI

The Future of Logic Based AI

The review editors asked me to say what I think the obstacles are to human-level AI by the logic route and why I think they can be overcome. If anyone could make a complete list of the obstacles, this would be a major step towards overcoming them. What I can actually do is much more tentative.

Workers in logic-based AI hope to reach human-level intelligence in a logic-based system. Such a system would, as proposed in [McCarthy, 1959], represent what it knew about the world in general, about the particular situation and about its goals by sentences in logic. Other data structures, e.g. for representing pictures, would be present, together with programs for creating them, manipulating them and deriving sentences that describe them. The program would perform the actions that it inferred were appropriate for achieving its goals.
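As a toy illustration, in no way McCarthy's 1959 proposal itself, such a system might store its knowledge as sentences and forward-chain to an action sentence; every fact, rule and action name below is invented:

```python
# Toy sketch of a logic-based agent: knowledge is a set of sentences
# (here, propositional atoms) plus rules; the agent forward-chains and
# performs any action sentence it can infer. All names are invented.
facts = {"key_in_room2", "robot_in_room1", "goal_has_key"}
rules = [
    ({"key_in_room2", "goal_has_key"}, "should_fetch_key"),
    ({"should_fetch_key", "robot_in_room1"}, "do(go, room2)"),
]

def forward_chain(facts, rules):
    """Return every sentence derivable from the facts by the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

actions = [s for s in forward_chain(facts, rules) if s.startswith("do(")]
```

The point of the sketch is only that the agent's knowledge and goals are sentences, and its behavior is read off from what those sentences entail.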

Logic-based AI is the most ambitious approach to AI, because it proposes to understand the common sense world well enough to express what is required for successful action in formulas. Other approaches to AI do not require this. Anything based on neural nets, for example, hopes that a net can be made to learn human-level capability without the people who design the original net knowing much about the world in which their creation learns. Maybe this will work, but then they may have an intelligent machine and still not understand how it works. This prospect seems to appeal to some people.

Common sense knowledge and reasoning are at the core of AI, because a human or an intelligent machine always starts from a situation in which the information available to it has a common sense character. Mathematical models of the traditional kind are embedded in common sense. This was not obvious, and many scientists supposed that the development of mathematical theories would obviate the need for common sense terminology in scientific work. Here are two quotations that express this attitude.

One service mathematics has rendered to the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'. --E. T. Bell

All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word 'cause' never occurs ... The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm. --B. Russell, "On the Notion of Cause", Proceedings of the Aristotelian Society, 13 (1913), pp. 1-26.

The "Nemesis" theory of the mass extinctions holds that our sun has a companion star that every 26 million years comes close enough to disrupt the Oort cloud of comets, some of which then come into the inner solar system and bombard the earth, causing extinctions. The Nemesis theory involves gravitational astronomy, but it doesn't propose a precise orbit for the star Nemesis, still less orbits for the comets in the Oort cloud. Therefore, the theory is formulated in terms of the common sense notion of causality.

It was natural for Russell and Bell to be pleased that mathematical laws were available for certain phenomena that had previously been treated only informally. However, they were interested in a hypothetical information situation in which the scientist has full knowledge of an initial configuration, e.g. in celestial mechanics, and needs to predict the future. It was only when people began to work on AI that it became clear that general intelligence requires machines that can handle the common sense information situation in which concepts like "causes" are appropriate. Even after that it took 20 years before it was apparent that nonmonotonic reasoning could be, and had to be, formalized.
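Nonmonotonic reasoning can be illustrated by the standard birds-fly default; the following is a sketch of the idea only, not a circumscription or any published formalism:

```python
# Default (nonmonotonic) reasoning sketch: conclude that a bird flies
# unless it is known to be abnormal. Adding knowledge can *retract*
# a conclusion, which no monotonic logic allows.
def flies(x, facts):
    return ("bird", x) in facts and ("abnormal", x) not in facts

facts = {("bird", "tweety")}
conclusion_before = flies("tweety", facts)   # concluded by default
facts.add(("abnormal", "tweety"))            # learn Tweety is a penguin
conclusion_after = flies("tweety", facts)    # the conclusion is withdrawn
```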

Making a logic-based human-level program requires enough progress on at least the following problems:

extensions of mathematical logic
Besides nonmonotonic reasoning, other problems in the logic of AI are beginning to take definite form, including the formalization of contexts as objects. This can provide a logical way of matching the human ability to use language in different ways depending on context. See [McCarthy, 1987], [Guha, 1991], [McCarthy, 1993], [Buvac and Mason, 1993].
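A minimal sketch in the spirit of the ist(c, p) relation of [McCarthy, 1993], with invented contexts and sentences, asserting that a sentence holds only relative to a context:

```python
# Contexts as first-class objects: ist(c, p) says that sentence p holds
# in context c, so the same sentence can be true in one context and
# false in another. The contexts and sentences here are invented.
ist = {
    ("context_us_law", "driving_age(16)"),
    ("context_uk_law", "driving_age(17)"),
}

def holds(context, sentence):
    """Truth of a sentence is always relative to a context."""
    return (context, sentence) in ist
```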

elaboration tolerance
Formalisms need to admit elaborations without a human having to restart the formalization from the beginning. There are ideas but as yet no articles.
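One way to picture the requirement, a hypothetical sketch since no worked-out formalism is cited here: state the theory as sentences read by a generic reasoner, so that an elaboration such as "the boat is smaller" is one added sentence rather than a rewritten program.

```python
# Elaboration-tolerance sketch: the boat's capacity is a default read
# from the axiom set, so elaborating the problem means adding a
# sentence, not editing the reasoner. The syntax is invented.
def boat_capacity(axioms):
    for a in axioms:
        if a.startswith("capacity="):
            return int(a.split("=")[1])
    return 2    # default capacity in missionaries-and-cannibals

axioms = set()
default_cap = boat_capacity(axioms)       # the unelaborated problem
axioms.add("capacity=1")                  # elaboration: a smaller boat
elaborated_cap = boat_capacity(axioms)    # original axioms untouched
```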

concurrent events
[Gelfond, Lifschitz and Rabinov, 1991] treats this using the situation calculus, and I have some recent and still unpublished results aimed at a simpler treatment.
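The flavor of the problem can be shown with a STRIPS-like sketch, which is not the [Gelfond, Lifschitz and Rabinov, 1991] formalism: a situation is a set of fluents, and result() applies a whole set of concurrent events at once.

```python
# Situation-calculus sketch with concurrency: result(events, s) yields
# the situation after a *set* of events occurs together. The events and
# their add/delete effects are invented, not a published axiomatization.
effects = {
    "lift_left_end":  ({"left_up"}, set()),
    "lift_right_end": ({"right_up"}, set()),
}

def result(events, situation):
    new = set(situation)
    for e in events:
        adds, deletes = effects[e]
        new -= deletes
        new |= adds
    return frozenset(new)

s0 = frozenset()
s1 = result({"lift_left_end", "lift_right_end"}, s0)  # lifted together
```

Lifting both ends of a table concurrently keeps it level, an effect neither event has alone, which is what makes concurrency harder than interleaving.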

intentionality
The treatment of mental objects such as beliefs (much discussed in the philosophical literature) and of the corresponding term concepts, e.g. "what he thinks electrons are" (hardly discussed at all in the formal literature).

reification
We need a better understanding of what are to be considered objects in the logic. For example, a full treatment of the missionaries-and-cannibals problem together with reasonable elaborations must allow us to say, ``There are just two things wrong with the boat.''
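A sketch of the reification point: once defects are objects in the domain, sentences can quantify over and count them. The defects named below are invented.

```python
# Reification sketch: "things wrong with the boat" become objects, so
# "there are just two things wrong with the boat" is a countable claim
# rather than something outside the formalism.
wrong_with = {
    "boat": {"leaks", "oar_missing"},   # hypothetical defects as objects
}
num_wrong = len(wrong_with["boat"])     # just two things wrong
```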

introspection and transcendence
Human intelligence has the ability to survey, in some sense, the whole of its activity, and to consider any assumption, however built-in, as subject to question. Humans aren't really very good at this, and it is only needed for some very high level problems. Nevertheless, we want it, and there are some ideas about how to get it. What may work is to use the context mechanism as discussed in [McCarthy, 1993] to go beyond the outermost context considered so far.

Unfortunately, too many people concentrated on self-referential sentences. It's a cute subject, but not relevant to human introspection or to the kinds of introspection we will have to make computers do.

levels of description
If one is asked how an event occurred, one can often answer by giving a sequence of lower level events that answer the question for the particular occurrence. Once I bought some stamps by going to the stamp-selling machine in the airport and putting in six dollars, etc. Each of these subevents has its own how, but I didn't plan them and cannot recall them. A stamp-buying coach would have analyzed them to a lower level than I could and would be able to teach me how to buy stamps more effectively. For AI we therefore need a more flexible notion than the computer science theories of how programs are built up from elementary operations.
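The stamp-buying example suggests a hierarchy of events, each expandable into subevents down to whatever level the theory (or the coach) covers; a sketch with invented event names:

```python
# Levels-of-description sketch: "how did the event occur?" is answered
# by expanding it into subevents, recursively to a chosen depth.
subevents = {
    "buy_stamps":   ["go_to_machine", "insert_money", "take_stamps"],
    "insert_money": ["insert_dollar"] * 6,   # six dollars, one at a time
}

def expand(event, depth):
    if depth == 0 or event not in subevents:
        return [event]
    return [low for sub in subevents[event] for low in expand(sub, depth - 1)]

level1 = expand("buy_stamps", 1)   # the account I could actually give
level2 = expand("buy_stamps", 2)   # the coach's finer-grained account
```

Unlike a program built from fixed elementary operations, the expansion can stop at any level, and levels below a certain depth may simply be absent from the agent's theory.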

Dreyfus asks why anyone should believe all this can be done. It seems as good a bet as any other difficult scientific problem. Recently progress has become more rapid, and many people have entered the field of logical AI in the last 15 years. Besides those whose papers I referenced, these include Raymond Reiter, Leora Morgenstern, Donald Perlis, Ernest Davis, Murray Shanahan, David Etherington, Yoav Shoham, Fangzhen Lin, Sarit Kraus, Matthew Ginsberg, Douglas Lenat, R. V. Guha, Hector Levesque, Jack Minker, Tom Costello, Erik Sandewall, Kurt Konolige and many others. There aren't just a few ``die-hards''.

However, reaching human-level AI is not a problem within engineering range of solution. Very likely, fundamental scientific discoveries are still to come.



John McCarthy
Tue Jun 13 01:06:06 PDT 2000