

John McCarthy
Computer Science Department
Stanford University
Stanford, CA 94305



This article is oriented toward the use of modality in artificial intelligence (AI). An agent must reason about what it or other agents know, believe, want, intend or owe. Referentially opaque modalities are needed and must be formalized correctly. Unfortunately, modal logics seem too limited for many important purposes. This article contains examples of uses of modality for which modal logic seems inadequate.

I have no proof that modal logic is inadequate, so I hope modal logicians will take the examples as challenges.

Maybe this article will also have philosophical and mathematical logical interest.

Here are the main considerations.

Many modalities:
Natural language often uses several modalities in a single sentence, e.g. ``I want him to believe that I know he has lied.'' [Gab96] introduces a formalism for combining modalities, but I don't know whether it can handle the examples mentioned in this article.

New Modalities:
Human practice sometimes introduces new modalities on an ad hoc basis. The institution of owing money or the obligations the Bill of Rights imposes on the U.S. Government are not matters of basic logic. Introducing a new modality should involve no more fuss than introducing a new predicate. In particular, human-level AI requires that programs be able to introduce modalities when this is appropriate, e.g. to have functions taking modalities as values.

Knowing what:
``Pat knows Mike's telephone number'' is a simple example. In [McC79b], this is formalized as

    knows(pat, Telephone(Mike)),

where pat stands for the person Pat, Mike stands for a standard concept of the person Mike, and Telephone takes a concept of a person into a concept of his telephone number. We might have

    telephone(mike) = telephone(mary),

expressing the fact that Mike and Mary have the same telephone, but we won't have

    Telephone(Mike) = Telephone(Mary),

which would assert that the concept of Mike's telephone number is the same as that of Mary's telephone number. This permits us to have

    not knows(pat, Telephone(Mary)),

even though Pat knows Mike's telephone number, which happens to be the same as Mary's. The theory in [McC79b] also includes functions from some kinds of things, e.g. numbers or people, to standard concepts of them. This permits saying that Kepler did not know that the number of planets is composite while saying that Kepler knew that the number we know to be the number of planets (9) is composite.

The point of this example is not mainly to advertise [McC79b] but to advocate that a theory of knowledge must treat knowing what as well as knowing that and to illustrate some of the capabilities needed for adequately using knowing what.

The use of terms denoting concepts could be avoided by quantifying into the knowledge operator, e.g. writing something like (exists x)knows(pat, telephone(mike) = x), but the required ``quantifying in'' is likely to be a nuisance.
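The concept/denotation distinction can be made concrete in a minimal sketch. Everything here is a hypothetical illustration (the names, the phone number, and the `knows` relation are mine, not [McC79b]'s formalism): concepts remain distinct objects even when their denotations coincide, so knowing one concept's value does not entail knowing the other's.

```python
# Hypothetical sketch of the concept/denotation distinction.
# Concepts are distinct objects; denotations may coincide.

denotation = {
    "Telephone(Mike)": "321-7580",  # made-up number, same for both
    "Telephone(Mary)": "321-7580",  # same telephone, distinct concepts
}

# Knowledge attaches to concepts, not denotations.
knows = {("pat", "Telephone(Mike)")}

def knows_concept(person, concept):
    """Pat's knowledge is a relation on concepts."""
    return (person, concept) in knows

# Equal denotations do not force equal knowledge:
print(denotation["Telephone(Mike)"] == denotation["Telephone(Mary)"])  # True
print(knows_concept("pat", "Telephone(Mike)"))   # True
print(knows_concept("pat", "Telephone(Mary)"))   # False
```

This is exactly the referential opacity the section describes: substituting a co-denoting concept inside `knows` changes the truth value.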

Proving Non-knowledge
[McC78] formalizes two puzzles whose solution requires inferring non-knowledge from previously asserted non-knowledge and from limiting what is learned when a person hears some information.

[McC78] uses a variant of the Kripke accessibility relation, but here it is used directly in first order logic rather than to give semantics to a modal logic. The relation is A(w1,w2,person,time), interpreted as asserting that in world w1, it is possible for person that the world is w2. Non-knowledge of a term in w1, e.g. the color of a spot or the value of a numerical variable, is expressed by saying that there is a world w2, accessible from w1, in which the value of the term differs from its value in w1.
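The first-order use of accessibility can be sketched directly. This is a minimal illustration under my own assumptions (worlds as dicts of term values, an explicit tuple relation, person names S1 and S2 invented), not [McC78]'s axiomatization:

```python
# Worlds assign values to terms; A(w1, w2, person, time) holds when, in
# world w1, person considers w2 possible at that time.
worlds = {
    "w1": {"spot_color": "white"},
    "w2": {"spot_color": "black"},
}

A = {
    ("w1", "w1", "S1", 0), ("w1", "w2", "S1", 0),  # S1 can't rule out w2
    ("w1", "w1", "S2", 0),                          # S2 has ruled it out
}

def knows_value(w1, person, time, term):
    """A person knows the value of `term` in w1 iff the term has the
    same value in every world accessible for them from w1."""
    accessible = [w2 for (u, w2, p, t) in A
                  if u == w1 and p == person and t == time]
    values = {worlds[w2][term] for w2 in accessible}
    return len(values) == 1

# Non-knowledge: an accessible world where the term's value differs.
print(knows_value("w1", "S1", 0, "spot_color"))  # False
print(knows_value("w1", "S2", 0, "spot_color"))  # True
```

Note that non-knowledge is just a first-order existential statement about the relation, with no modal operator needed.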

[Lev90] uses a modality whose interpretation is ``all I know is p.'' He uses autoepistemic logic [Moo85], a nonmonotonic modal logic. This seems inadequate in general, because we need to be able to express ``All I know about the value of x is p.'' Here's an example. At one stage in Mr. S and Mr. P, we can say that all Mr. P knows about the value of the pair is their product and the fact that their sum is not the sum of two primes.
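That stage of Mr. S and Mr. P can be computed directly: the pairs consistent with everything Mr. P then knows are the factorizations of his product whose sum is not a sum of two primes. The following is a sketch under the usual assumptions for the puzzle (both numbers between 2 and 99); the function names are mine:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sum_of_two_primes(s):
    """True if s = p + q for primes p, q."""
    return any(is_prime(p) and is_prime(s - p) for p in range(2, s - 1))

def p_candidates(product, lo=2, hi=99):
    """Pairs (x, y), lo <= x <= y <= hi, consistent with all Mr. P knows
    at this stage: their product, and that their sum is not a sum of
    two primes."""
    return [(x, product // x)
            for x in range(lo, int(product**0.5) + 1)
            if product % x == 0
            and lo <= product // x <= hi
            and not sum_of_two_primes(x + product // x)]

# 52 = 2*26 = 4*13; only (4, 13) has a sum (17) that is not a sum of two
# primes, so with product 52 Mr. P would know the pair at this stage.
print(p_candidates(52))  # [(4, 13)]
```

Mr. P knows the pair exactly when one candidate survives; ``all he knows about the pair'' is precisely this candidate set, which is what the ``all I know about x'' modality would have to denote.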

[KPH91] shows how President Bush could reason that he didn't know whether Gorbachev was standing or sitting, and how Bush could also reason that Gorbachev didn't know whether Bush was standing or sitting. The treatment does not use modal logic but rather a variant of circumscription called autocircumscription proposed by Perlis [Per88].

Joint knowledge and learning
In the wise men problem, they learn at each stage that the others don't know the colors of their spots, and in Mr. S and Mr. P they learn what the others have said. In each case what is learned is joint knowledge: several people knowing something jointly implies not only that each knows it but also that each knows the others know it, and so on. [McC78] treats joint knowledge by introducing a pseudo-person for each subset of the real knowers. The pseudo-person knows what the subset knows jointly. The logical treatment of joint knowledge in [McC78] makes the joint knowers S5 in their knowledge. I don't know whether a more subtle axiomatization would avoid this.
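One standard way to realize the pseudo-person's accessibility relation (a sketch under my own assumptions, not necessarily [McC78]'s exact construction) is to take the transitive closure of the union of the members' relations; what the pseudo-person then knows is what the group knows jointly:

```python
# Hedged sketch: the group's pseudo-person gets the transitive closure
# of the union of the members' accessibility relations.

def joint_accessibility(relations):
    """relations: one set of (w1, w2) pairs per group member.
    Returns the transitive closure of their union."""
    R = set().union(*relations)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(R):
            for (c, d) in list(R):
                if b == c and (a, d) not in R:
                    R.add((a, d))
                    changed = True
    return R

# Invented two-member example: Alice can't rule out w2; Bob, if in w2,
# can't rule out w3.
R_alice = {("w1", "w1"), ("w1", "w2")}
R_bob = {("w2", "w2"), ("w2", "w3")}
R_group = joint_accessibility([R_alice, R_bob])

# The group jointly rules out fewer worlds than either member alone:
print(sorted(w2 for (w1, w2) in R_group if w1 == "w1"))
```

The closure is what makes the pseudo-person's knowledge weaker than any individual's: a chain of ``Alice considers a world possible in which Bob considers a world possible...'' must all be counted.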

[McC78] treats learning a fact by using the time argument of the accessibility relation. After a person learns a fact p, the worlds that are possible for him are those that were previously possible for him and in which p holds. Learning the value of a term is treated similarly.
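The update rule is simple to state as code. A minimal sketch, with the worlds and the learned fact invented for illustration:

```python
# Learning p: keep only the previously possible worlds in which p holds.

worlds = {
    "w1": {"spot": "white"},
    "w2": {"spot": "black"},
}

possible = {"w1", "w2"}  # worlds the person cannot yet rule out

def learn(possible, holds):
    """Filter the possibility set by the learned fact."""
    return {w for w in possible if holds(worlds[w])}

# The person learns that the spot is white.
possible = learn(possible, lambda w: w["spot"] == "white")
print(possible)  # {'w1'}
```

Indexing the accessibility relation by time, as [McC78] does, amounts to keeping one such possibility set per time point rather than mutating a single set.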

Other modalities
[McC79a] treats believing and intending and [McC96] treats introspection by robots. Neither paper introduces enough formalism to provide a direct challenge to modal logic, but it seems to me that the problems are even harder than those previously treated.

Acknowledgements: This work was supported in part by DARPA (ONR) grant N00014-94-1-0775. Tom Costello provided some useful discussion.

John McCarthy
Tue Mar 18 18:25:02 PDT 1997