Computer Science Department
Stanford, CA 94305
This article is oriented toward the use of modality in artificial intelligence (AI). An agent must reason about what it or other agents know, believe, want, intend or owe. Referentially opaque modalities are needed and must be formalized correctly. Unfortunately, modal logics seem too limited for many important purposes. This article contains examples of uses of modality for which modal logic seems inadequate.
I have no proof that modal logic is inadequate, so I hope modal logicians will take the examples as challenges.
Maybe this article will also have philosophical and mathematical logical interest.
Here are the main considerations.
The point of this example is not mainly to advertise [McC79b] but to advocate that a theory of knowledge must treat knowing what as well as knowing that, and to illustrate some of the capabilities needed for adequately using knowing what.
could be avoided by writing
but the required ``quantifying in'' is likely to be a nuisance.
[McC78] uses a variant of the Kripke accessibility relation, but here it is used directly in first order logic rather than to give semantics to a modal logic. The relation is A(w1, w2, person, time), interpreted as asserting that in world w1, it is possible for person that the world is w2. Non-knowledge of a term in w1, e.g. the color of a spot or the value of a numerical variable, is expressed by saying that there is a world w2, possible for the person in w1, in which the value of the term differs from its value in w1.
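The accessibility-relation treatment can be sketched computationally. The following is an illustrative sketch, not from [McC78]: worlds are represented as dictionaries mapping term names to values, and a person's accessible worlds (fixing w1, person, and time) as a list. Knowing the value of a term then amounts to the term having the same value in every accessible world.

```python
# Sketch (not from the paper): worlds as dicts mapping term names to values.
# The list represents the worlds accessible to one person at one time.

def knows_value(accessible_worlds, term):
    """The person knows the value of `term` iff it is the same
    in every accessible world."""
    values = {w[term] for w in accessible_worlds}
    return len(values) == 1

# A world where the spot is red and one where it is blue are both
# accessible, so the person does not know the spot's color;
# x is 3 in every accessible world, so the person knows x.
worlds = [{"spot_color": "red", "x": 3},
          {"spot_color": "blue", "x": 3}]
print(knows_value(worlds, "spot_color"))  # False
print(knows_value(worlds, "x"))           # True
```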
[Lev90] uses a modality whose interpretation is ``all I know is .''. He uses autoepistemic logic [Moo85], a nonmonotonic modal logic. This seems inadequate in general, because we need to be able to express ``all I know about the value of x is .'' Here's an example. At one stage of the Mr. S and Mr. P puzzle, we can say that all Mr. P knows about the pair of numbers is their product and the fact that their sum is not the sum of two primes.
[KPH91] treats the question of showing how President Bush could reason that he didn't know whether Gorbachev was standing or sitting and how Bush could also reason that Gorbachev didn't know whether Bush was standing or sitting. The treatment does not use modal logic but rather a variant of circumscription called autocircumscription proposed by Perlis [Per88].
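For comparison, the nonknowledge in this example can also be expressed in the possible-worlds style of [McC78]: Bush fails to know whether Gorbachev is standing if worlds with both truth values are accessible to him. The sketch below is only illustrative; it is not Perlis's autocircumscription.

```python
# Sketch: nonknowledge of a proposition via possible worlds
# (illustrative only; this is not autocircumscription).

def knows_whether(accessible_worlds, prop):
    """The agent knows whether `prop` iff it has the same truth
    value in every accessible world."""
    truth_values = {prop(w) for w in accessible_worlds}
    return len(truth_values) == 1

# Worlds where Gorbachev stands and where he sits are both accessible
# to Bush, so Bush does not know whether Gorbachev is standing.
bush_worlds = [{"gorbachev_standing": True},
               {"gorbachev_standing": False}]
print(knows_whether(bush_worlds, lambda w: w["gorbachev_standing"]))  # False
```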
[McC78] treats learning a fact by using the time argument of the accessibility relation. After person learns a fact p the worlds that are possible for him are those worlds that were previously possible for him and in which p holds. Learning the value of a term is treated similarly.
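The treatment of learning described above is simply a restriction of the accessible worlds. Continuing the dictionary-based sketch (again illustrative, not the paper's formalism): after learning p, the accessible worlds are the previously accessible ones in which p holds, and learning the value of a term is the same operation with the fact that the term equals that value.

```python
# Sketch: learning a fact restricts the accessible worlds to those
# previously possible in which the fact holds.

def learn(accessible_worlds, fact):
    """New accessible worlds = old ones in which the learned fact holds."""
    return [w for w in accessible_worlds if fact(w)]

worlds = [{"x": 3, "spot_color": "red"},
          {"x": 3, "spot_color": "blue"},
          {"x": 7, "spot_color": "red"}]

# Learning that the spot is red leaves two worlds.
after = learn(worlds, lambda w: w["spot_color"] == "red")
print(len(after))  # 2

# Learning the value of a term works the same way: afterwards the
# person knows x, since x = 3 in every remaining world.
after2 = learn(after, lambda w: w["x"] == 3)
print([w["x"] for w in after2])  # [3]
```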
Acknowledgements: This work was supported in part by DARPA (ONR) grant N00014-94-1-0775. Tom Costello provided some useful discussion.