
Common Sense in Lenat's Work

Douglas Lenat is one of the few workers in AI at whose recent work Dreyfus has taken a peek.

Dreyfus, p. xvii and xviii, writes:

When, instead of developing philosophical theories of the transcendental conditions that must hold if the mind is to represent the world, or proposing psychological models of how the storage and retrieval of propositional representations works, researchers in AI actually tried to formulate and organize everyday consensus knowledge, they ran into what has come to be called the commonsense-knowledge problem. There are really at least three problems grouped under this rubric:
  1. How everyday knowledge must be organized so that one can make inferences from it.
  2. How skills or know-how can be represented as knowing-that.
  3. How relevant knowledge can be brought to bear in particular situations.

While representationalists have written programs that attempt to deal with each of these problems, there is no generally accepted solution, nor is there a proof that these problems cannot be solved. What is clear is that all attempts to solve them have run into unexpected difficulties, and this in turn suggests that there may well be in-principle limitations on representationalism. At the very least these difficulties lead us to question why anyone would expect the representationalist project to succeed.

That's not too bad a summary except for the rhetorical question at the end. Why should one expect it to be easy, and why should one expect it not to succeed eventually in reaching human level intelligence? Most of the people who have pursued the approach have seen enough of what they regard as progress to expect eventual success. I have referred to some of this progress in my account of the invention and development of formalized nonmonotonic reasoning.

Mostly I agree with what Lenat said (as Dreyfus quotes him in the book), and I don't find much support for Dreyfus's assertions that empathy rather than just verbalizable understanding is required in order to understand human action. I think the example on p. xix of what ``it'' means in

Mary saw a dog in the window. She wanted it.
is within the capability of some current parsers that use semantic and pragmatic information.
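For concreteness, here is a minimal Python sketch, not taken from any actual parser, of how semantic and pragmatic constraints can pick the antecedent of ``it'': the verb ``want'' prefers an animate, acquirable object such as the dog over a fixture such as the window. The feature names and weights are invented for illustration.

  # A minimal sketch (not any particular system) of resolving "it" in
  #   "Mary saw a dog in the window. She wanted it."
  # using semantic and pragmatic features.  Features and weights are
  # illustrative assumptions, not drawn from a real parser.

  from dataclasses import dataclass

  @dataclass
  class Candidate:
      word: str
      animate: bool   # semantic feature: a living thing
      fixture: bool   # pragmatic feature: part of a building, not normally acquired
      recency: int    # distance back in the discourse (smaller = more recent)

  def score_as_object_of_want(c: Candidate) -> float:
      """Score how plausible the candidate is as the thing wanted."""
      score = 0.0
      score += 2.0 if c.animate else 0.0   # pets are typical objects of wanting
      score -= 3.0 if c.fixture else 0.0   # one rarely wants the shop window itself
      score -= 0.5 * c.recency             # mild preference for recent mentions
      return score

  def resolve_it(candidates: list[Candidate]) -> Candidate:
      """Pick the antecedent of "it" that best fits the verb's preferences."""
      return max(candidates, key=score_as_object_of_want)

  if __name__ == "__main__":
      dog = Candidate("dog", animate=True, fixture=False, recency=2)
      window = Candidate("window", animate=False, fixture=True, recency=1)
      print(resolve_it([dog, window]).word)   # -> "dog"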

However, I think the following assertion of Lenat's [Lenat and Guha, 1990] quoted by Dreyfus on p. xxv is an oversimplification.

These layers of analogy and metaphor eventually `bottom out' at physical-somatic-primitives: up, down, forward, back, pain, cold, inside, seeing, sleeping, tasting, growing, containing, moving, making noise, hearing, birth, death, strain, exhaustion, ...

The contact of humans (and future robots) with the common sense world is on many levels, and our concepts are on many levels. Events that might bottom out physically--as informing someone of something may physically bottom out in making a noise--often don't bottom out epistemologically. We may assert that A informed B of something without our being able to describe the act in terms of making noise or typing on a keyboard.

While I don't agree with Lenat's formulation, the success of Cyc doesn't depend on its correctness. Cyc can perfectly well (and indeed does) store information obtained at several levels of organization and used by programs that interact with the world at several levels.
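As a toy illustration (not Cyc or its representation language), the following Python fragment stores assertions at two levels of description and answers an epistemic-level question without appealing to any physical-level fact. The predicate names and facts are invented for this example.

  # Toy knowledge base with assertions at two levels of description.
  facts = {
      # Epistemic level: A informed B of something, with no commitment
      # to how the information was physically transmitted.
      ("informed", "A", "B", "the meeting is at noon"),
      # Physical level: an independent, lower-level description of an event.
      ("made_noise", "A", "speech", "t1"),
  }

  def who_knows(proposition: str) -> set[str]:
      """Return agents informed of the proposition, using only
      epistemic-level facts."""
      return {f[2] for f in facts if f[0] == "informed" and f[3] == proposition}

  print(who_knows("the meeting is at noon"))   # -> {'B'}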

All this doesn't guarantee that Cyc will succeed as a database of common sense knowledge. There may be too big a conceptual gap in the AI community's ideas of what the usefully stored elements of common sense knowledge are.


