SYLLABUS FOR FALL 2000
Computer Science Department
Stanford, CA 94305
CS323 covers primarily logical AI with emphasis on the epistemology
(often called ontology these days) and nonmonotonic reasoning. The
first part of the course will be on the theory of events and actions
and will use [Sha97], Solving the Frame Problem by
Murray Shanahan, as a text.
I'm not sure how rigorously I will follow this syllabus. There will
definitely be some additional readings, and some topics may take
longer to cover than planned.
The first lectures will follow the preface, introduction and first
chapter of Shanahan, [McC99c], my note on the philosophical and
scientific presuppositions of logical AI and [Lif96a],
Vladimir Lifschitz's notes reminding us of the intended and unintended
models of logical axiom sets.
The approximately 19 lectures will cover topics roughly as follows:
- Approaches to AI: the biological approach, which imitates the
  human; the computer science approach, which looks at the problems
  the world presents; and the logic approach, which emphasizes facts
  more than programs.
Epistemology and heuristics. Some history. Nonmonotonic reasoning,
context and other extensions to mathematical logic. Importance of
the common sense informatic situation. How far to human-level AI?
Reading: preface and introduction of the Shanahan book, [McC59],
[McC99c]. Optional: [McC96a].
- Logic for AI (Lifschitz's simple blocks world example). Logical
languages, interpretations, models. Intended and unintended models.
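A toy illustration of unintended models (my own made-up example, not taken from the readings): the same axiom set is satisfied by interpretations the axiomatizer never intended.

```latex
% Two blocks-world axioms:
%   On(A, B) and On(B, Table)
% Intended model: A and B are blocks, Table is the table,
% and On is the physical "rests on" relation.
% An unintended model over the natural numbers:
%   A -> 1, B -> 2, Table -> 3, On -> "less than"
% Both interpretations satisfy the axioms, so the axioms
% alone do not single out the intended model.
\mathit{On}(A, B) \wedge \mathit{On}(B, \mathit{Table})
```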
- Situation Calculus: [MH69] is the original
  reference, but this year we'll follow Shanahan.
Preread: Chapters 1 and 2 of Shanahan, up to p. 47.
- Continuation of blocks world formalization. Frames as
  objects (not in Shanahan).
Exercises on situation calculus.
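A minimal sketch of situation calculus notation (an illustrative effect axiom under my own simplified conventions, not Shanahan's exact formulation):

```latex
% Result(a, s) denotes the situation that results from
% performing action a in situation s; Holds(f, s) says
% that fluent f is true in situation s.
% Illustrative effect axiom for moving block x onto block y:
\mathit{Holds}(\mathit{On}(x, y), \mathit{Result}(\mathit{Move}(x, y), s))
  \leftarrow \mathit{Clear}(x, s) \wedge \mathit{Clear}(y, s)
```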
- Logical Foundations including circumscription. The rest of
  chapter 2, and chapters 3 and 4 of Shanahan.
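For orientation, the standard second-order definition of circumscribing a predicate P in a sentence A(P) (the usual textbook form; Shanahan's notation may differ in details):

```latex
% Circ[A(P); P] says that P has a minimal extension among
% the predicates satisfying A:
\mathrm{Circ}[A(P); P] \equiv A(P) \wedge \neg \exists p\, [A(p) \wedge p < P]
% where p < P abbreviates
%   \forall x\,(p(x) \rightarrow P(x)) \wedge
%   \neg \forall x\,(P(x) \rightarrow p(x)).
```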
- More on theories of action. Up to page 200 of Shanahan if there
  is time.
- Philosophical issues. Semantics of ``can''. The practice of AI
requires taking sides in certain longstanding philosophical
controversies. For example, a person designing a computer program
to learn about the world has to regard the world as more than a
construct in the ``mind'' of the computer program. Otherwise, how
could he compare the beliefs the program will come to have with
facts about the world? Lots more philosophy is involved, and it
seems to me that a whole area of AI research is based on a wrong
philosophy. See [McC99a].
Preread: (1) [McC99c], (2) section
on ``can'' from [MH69], (3) [McC79a].
- Contexts as formal objects. It is a truism that the meanings of
sentences and terms depend on context. The meaning of a context is
itself contextual, and you can carry this as far as seems useful.
The innovation here is a formal theory of the relations of different
contexts and how meanings depend on context.
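The basic notation makes truth-in-a-context a formal relation: ist(c, p) asserts that proposition p is true in context c, and such assertions are themselves made within an outer context. An illustrative assertion (my example, not necessarily one from the readings):

```latex
% ist(c, p): proposition p is true in context c.
% The assertion itself is made in an outer context c0:
c_0 \colon \mathit{ist}(\mathit{contextOfSherlockHolmesStories},
    \text{``Holmes is a detective''})
```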
- Elaboration tolerance.
This is a new topic, but it relates to the continual computer
science desire for modularity. We are studying how to make logical
formalisms that allow changes in axiomatizations of phenomena
without having to start all over. Our Drosophila is the
missionaries and cannibals problem. The point is to see what variants
can be made purely by adding sentences to the original statement.
- Approximate concepts and approximate theories. Many important
concepts, perhaps most, are intrinsically approximate in that they
cannot be given if-and-only-if definitions. We study how statements
involving approximate concepts can have definite truth values. The
semantics of approximate concepts may be different from what is
standard in mathematical logic. [McC00].
- First order theories of individual concepts and propositions.
We treat individual concepts and propositions as first class
objects. This lets us say more about them than can be said using
them just as terms and formulas of first order logic.
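A sketch of the distinction (an illustrative convention I am assuming here: capitalized terms denote concepts, lowercase terms their denotations):

```latex
% Mike is the concept of the person mike:
\mathit{denot}(\mathit{Mike}) = \mathit{mike}
% Knowing is a relation to a concept, so one can assert
\mathit{knows}(\mathit{pat}, \mathit{Telephone}(\mathit{Mike}))
% without asserting anything about the number itself,
%   \mathit{telephone}(\mathit{mike}),
% which is the denotation of the concept Telephone(Mike).
```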
- Heuristics: We would like to have a declarative theory of
heuristics so programs can reason about them. Most likely we won't
have more than illustrative examples of heuristics and hints at
making them into objects. Reading to come, maybe.
- Formalization of knowledge.
Preread: [Hal95], [McC78].
- Introspection for Robots. Like people, robots will need to
think about their own mental states, e.g. about their own
intentions. Preread: [McC96b].
Thu Sep 28 11:54:38 PDT 2000