John McCarthy
Computer Science Department
Stanford University
Stanford, CA 94305

CS323 covers primarily logical AI with emphasis on the epistemology (often called ontology these days) and nonmonotonic reasoning. The first part of the course will be on the theory of events and actions and will use [Sha97], Solving the Frame Problem by Murray Shanahan, as a text.

I'm not sure how rigorously I will follow this syllabus. There will definitely be some additional readings, and some topics may take longer to cover than planned.

The first lectures will follow the preface, introduction and first chapter of Shanahan, [McC99c], my note on the philosophical and scientific presuppositions of logical AI and [Lif96a], Vladimir Lifschitz's notes reminding us of the intended and unintended models of logical axiom sets.

The approximately 19 lectures will cover topics roughly as follows:

  1. Approaches to AI. The biological approach imitates the human; the computer science approach looks at the problems the world presents; the logic approach emphasizes facts more than programs. Epistemology and heuristics. Some history. Nonmonotonic reasoning, context and other extensions to mathematical logic. Importance of the common sense informatic situation. How far to human-level AI? Reading: preface and introduction of the Shanahan book, [McC59], [McC99c]. Optional: [McC96a].
  2. Logic for AI (Lifschitz's simple blocks world example). Logical languages, interpretations, models. Intended and unintended models. Preread: [Lif87], [Lif96a], [Lif96b].
  3. Situation Calculus: [MH69] is the original reference, but this year we'll follow Shanahan. Preread: Chapters 1 and 2 of Shanahan, up to p. 47.
  4. Continuation of blocks world formalization. Frames as objects. (not in Shanahan). Exercises on situation calculus.
  5. Logical foundations, including circumscription. The rest of chapter 2, and chapters 3 and 4, of Shanahan.
  6. More on theories of action. Up to page 200 of Shanahan if there is time.
  7. Philosophical issues. Semantics of ``can''. The practice of AI requires taking sides in certain longstanding philosophical controversies. For example, a person designing a computer program to learn about the world has to regard the world as more than a construct in the ``mind'' of the computer program. Otherwise, how could he compare the beliefs the program will come to have with facts about the world? Lots more philosophy is involved, and it seems to me that a whole area of AI research is based on a wrong philosophy. See [McC99a].

    Preread: (1) [McC99c], (2) section on ``can'' from [MH69], (3) [McC79a].

  8. Contexts as formal objects. It is a truism that the meanings of sentences and terms depend on context. The meaning of a context is itself contextual, and you can carry this as far as seems useful. The innovation here is a formal theory of the relations of different contexts and how meanings depend on context. Preread: [McC93]
  9. Elaboration tolerance. Preread: [McC99b]. This is a new topic, but it relates to the continual computer science desire for modularity. We are studying how to make logical formalisms that allow changes in axiomatizations of phenomena without having to start all over. Our Drosophila is the missionaries and cannibals problem. The point is to see what variants can be made purely by adding sentences to the original statement.
  10. Approximate concepts and approximate theories. Many important concepts, perhaps most, are intrinsically approximate in that they cannot be given if-and-only-if definitions. We study how statements involving approximate concepts can have definite truth values. The semantics of approximate concepts may be different from what is standard in mathematical logic. [McC00].
  11. First order theories of individual concepts and propositions. We treat individual concepts and propositions as first class objects. This lets us say more about them than can be said using them just as terms and formulas of first order logic. Preread: [McC79b]
  12. Heuristics: We would like to have a declarative theory of heuristics so programs can reason about them. Most likely we won't have more than illustrative examples of heuristics and hints at making them into objects. Reading to come, maybe.
  13. Formalization of knowledge. Preread: [Hal95], [McC78].
  14. Introspection for Robots. Like people, robots will need to think about their own mental states, e.g. about their own intentions. Preread: [McC96b]
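
Items 2-4 above develop the blocks world in the situation calculus. As a rough procedural illustration only (this sketch and all its names -- `Situation`, `result`, `holds`, `apply_effects` -- are invented here, not drawn from Shanahan or the course readings), one can model a situation as the history of actions performed since the initial situation, compute which fluents hold by replaying that history, and let unaffected fluents persist by default:

```python
# A minimal, hypothetical sketch of situation-calculus ideas in Python.
# Fluents are tuples like ("on", "A", "Table"); actions like ("move", "A", "B").
from dataclasses import dataclass

@dataclass(frozen=True)
class Situation:
    history: tuple  # sequence of actions performed since the initial situation

S0 = Situation(())  # the initial situation

def result(action, s):
    """The situation reached by performing `action` in situation `s`."""
    return Situation(s.history + (action,))

def apply_effects(action, state):
    """Effect rules for one action; everything not mentioned is unchanged
    (a procedural stand-in for a solution to the frame problem)."""
    name, *args = action
    if name == "move":  # move(x, y): put block x on y (y may be "Table")
        x, y = args
        if ("clear", x) in state and (y == "Table" or ("clear", y) in state):
            new = {f for f in state if not (f[0] == "on" and f[1] == x)}
            for f in state:  # whatever block x sat on becomes clear
                if f[0] == "on" and f[1] == x and f[2] != "Table":
                    new.add(("clear", f[2]))
            new.add(("on", x, y))
            if y != "Table":
                new.discard(("clear", y))
            return new
    return state  # frame default: an inapplicable action changes nothing

def holds(fluent, s):
    """Does `fluent` hold in situation `s`? Replay the action history
    from a fixed initial state of two blocks on a table."""
    state = {("on", "A", "Table"), ("on", "B", "Table"),
             ("clear", "A"), ("clear", "B")}
    for act in s.history:
        state = apply_effects(act, state)
    return fluent in state
```

For example, after `s1 = result(("move", "A", "B"), S0)`, the fluent `("on", "A", "B")` holds and `("on", "B", "Table")` persists. The persistence here is built into the interpreter; the point of the course material is to obtain the same effect declaratively, e.g. by circumscription, rather than procedurally.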


Aarati Parmar
Thu Sep 28 11:54:38 PDT 2000