About the Formal Reasoning Group
The Stanford Formal Reasoning Group concentrates on Logical AI, an
approach to the development of symbolic AI first proposed by McCarthy
in his 1959 paper, Programs with Common Sense. Logical AI uses logic
to describe the manner in which intelligent machines or people
behave.
Workers in logic-based AI hope to reach human-level intelligence with
a logic-based system. Such a system, described by sentences in logic,
would represent what it knew about the world in general, what it knew
about the particular situation and what it knew about its goals, as
proposed in Programs with Common Sense. Other data structures,
e.g. for representing pictures, would be present together with
programs for creating them, manipulating them, and describing them
in logic. The program as a whole would perform actions that it
inferred were appropriate for achieving its goals.
Making a logic-based human-level program requires enough progress
on at least the following problems:
Extensions of mathematical logic
Besides nonmonotonic reasoning, other problems in the logic of AI are
beginning to take a definite form, including formalization of context.
Context
We work on formalizing contexts as first class objects. The
basic relation is ist(c,p). It asserts that the proposition
p is true in the context c. The most important formulas
relate the propositions true in different contexts. Introducing
contexts as formal objects will permit axiomatizations in limited
contexts to be expanded to transcend the original limitations.
This seems necessary to provide AI programs using logic with certain
capabilities that human fact representation and human reasoning
possess. Fully implementing transcendence seems to require
further extensions to mathematical logic, i.e. beyond the nonmonotonic
inference methods first invented in AI and now studied
as a new domain of logic.
We organized a symposium on the formalization of context as a part
of the AAAI 1995 Fall Symposium Series. We also have a number of
papers on this topic.
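As a rough illustration of the notation (a simplified example with
made-up context names and predicates, not taken from any particular
paper), the outer assertion
   c0: ist(c(Sherlock Holmes stories), ``Holmes is a detective'')
says, in an outer context c0, that the quoted proposition is true in
the context of the Sherlock Holmes stories, and a lifting formula
such as
   c0: ist(c(Sherlock Holmes stories), p)
         -> ist(c(literary criticism), in-the-stories(p))
relates what is true in the inner context to what is true in
another. Transcending the inner context amounts to moving out to c0,
where that context is itself an object one can reason about.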
Elaboration tolerance
The program must be able to take new facts into account to get a
different solution than was suggested by the old facts. Formalized
nonmonotonic reasoning was the first step toward achieving this.
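A standard small example (not specific to any particular paper of
ours) shows the flavor. From
   bird(Tweety)
   (forall x)(bird(x) & not ab(x) -> flies(x))
together with the nonmonotonic policy of minimizing the abnormality
predicate ab, one concludes flies(Tweety). Adding the new facts
penguin(Tweety) and (forall x)(penguin(x) -> ab(x)) makes that
conclusion go away without any of the original axioms being
withdrawn, which is just the behavior a monotonic logic cannot give.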
Concurrent events
Gelfond et al. have recently treated this using the situation
calculus, and we have some newer, still unpublished results aimed at
a simpler treatment.
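As a schematic illustration only (with made-up action and fluent
names, and not the unpublished treatment mentioned above): the
ordinary situation calculus maps a single action a and situation s
to a new situation result(a,s); one way to admit concurrency is to
let result take a set of simultaneously performed actions, as in
   holds(raised(Table), result({lift(left-end), lift(right-end)}, s))
where neither lift by itself achieves the effect. The hard part is
stating effect axioms and frame assumptions for such combinations
without an explosion of special cases.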
Intentionality
This concerns the treatment of mental objects such as beliefs (much
discussed in the philosophical literature) and of the corresponding
term concepts, e.g. ``what he thinks electrons are'' (hardly
discussed at all in the formal literature).
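One device from the small formal literature that does exist (used
here only as an illustration) is to let distinct terms denote an
object and a concept of it, say mike for the man and Mike for a
concept of him, related by denot(Mike) = mike. Then
   knows(pat, Telephone(Mike))
can assert that Pat knows Mike's telephone number, a statement about
the concept rather than about the number itself.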
Reification
We need a better understanding of what should be treated as objects
in the logic. For example, a full treatment of the ``missionaries
and cannibals'' problem, together with reasonable elaborations, must
allow us to say, ``There are just two things wrong with the boat.''
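As an illustrative formalization (with invented names for the
defects), making the things wrong with the boat into objects lets
one write
   wrong(leaky, Boat) & wrong(oar-missing, Boat) &
   (forall x)(wrong(x, Boat) -> x = leaky or x = oar-missing)
so that ``just two things wrong'' becomes a sentence the program can
assert, infer from, and elaborate, instead of something outside the
formalism.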
Introspection and transcendence
Human intelligence has the ability to survey, in some sense, the whole
of its activity, and to consider any assumption, however built-in, as
subject to question. Humans aren't really very good at this, and it
is only needed for some very high level problems. Nevertheless, we
want it, and there are some ideas about how to get it.
Unfortunately, too many people concentrated on self-referential
sentences. It's a cute subject, but not relevant to human introspection
or to the kinds of introspection we will have to make computers
do.
Levels of description
If one is asked how an event occurred, one can often answer by giving
a sequence of lower level events that answer the question for the
particular occurrence. Once I bought some stamps by going to the
stamp selling machine in the airport and putting in six dollars, etc.
Each of these subevents has a how, but I didn't plan them, and cannot
recall them. A stamp buying coach would have analyzed them to a lower
level than I could and would be able to teach me how to buy stamps
more effectively. For AI, therefore, we need a more flexible notion
than the computer science theories of how programs are built up from
elementary operations.
These problems typify the work the Formal Reasoning Group does. The
group consists of Prof. John McCarthy, Tom Costello, Eyal Amir and
Sasa Buvac. Sasa, Eyal and Tom are all around and their offices are
Gates 216 and 218. Johns office is GATES 208.