Due to the late date, this homework assignment is much shorter than the previous two.
Problem I (Rostand, 1897): (40 points)
Represent the following sentences
a) in a modal logic
b) in a syntactic logic
Note that for most sentences, you will need 2 modal operators (for part a) and 2 syntactic predicates (for part b) to represent knowledge and belief. In addition, you will need a modal operator (for part a) and a syntactic predicate (for part b) to represent "want" in sentence 7. For sentence 8, you will need to represent time explicitly. Finally, you can assume a 2-place predicate Writes-poetry(x,y) which is true if x writes poetry for y. (The concept is obviously crying out for a better representation, but formalizing the concept of a sonnet or of verse in formal logic is beyond the scope of this course.)
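For a sense of the difference between the two styles (using a sentence made up here for illustration, not one of the assigned sentences), "Roxane believes that Christian writes poetry for her" might come out as:

```
a) Modal:     B(Roxane, Writes-poetry(Christian, Roxane))
b) Syntactic: Believe(Roxane, "Writes-poetry(Christian, Roxane)")
```

In (a), B is a sentential operator applied directly to a formula; in (b), Believe is an ordinary first-order predicate whose second argument is a term that names the formula (e.g., a quoted string or Gödel number).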
Problem II (Fagin, Halpern, Moses, and Vardi, 1995): (30 points)
Consider the wise man puzzle, a variant of the muddy children puzzle. There are 3 wise men. It is common knowledge that there are 3 red hats and 2 white hats. Everyone can see the hats on everyone else's head, but not on his or her own head. The king puts a hat on each of the wise men, and asks each wise man (sequentially) if he knows the color of the hat on his head. The first answers no; the second answers no; the third answers that he knows the color of his hat.
a) Formalize this puzzle using a Kripke knowledge structure. Show how the knowledge structure changes after each answer.
b) What is the color of the third wise man's hat? Prove your answer.
c) If the third wise man is blind, but it is common knowledge that the first two can see, can the third wise man still come to know the color of his hat?
For an example of how one sets up one of these knowledge puzzles in a Kripkean structure, and to see what sort of proof is expected, see an example solution to the muddy children problem.
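To get a feel for the possible-worlds bookkeeping involved, here is a small computational sketch (helper names invented here; the assignment itself asks for a Kripke structure and a written proof, not code):

```python
from itertools import product

# A world is a tuple of hat colors for (man 1, man 2, man 3).
# Since only 2 white hats exist, the all-white world is impossible.
worlds = {w for w in product("RW", repeat=3) if w.count("W") <= 2}

def knows(i, world, worlds):
    """Man i knows his color in `world` iff every world he cannot
    distinguish from it (same hats on the other two heads) agrees
    on his own hat."""
    others = [j for j in range(3) if j != i]
    candidates = [v for v in worlds
                  if all(v[j] == world[j] for j in others)]
    return len({v[i] for v in candidates}) == 1

def announce(i, answer_knows, worlds):
    """Keep only the worlds consistent with man i's public answer."""
    return {w for w in worlds if knows(i, w, worlds) == answer_knows}

worlds = announce(0, False, worlds)  # first wise man answers "no"
worlds = announce(1, False, worlds)  # second wise man answers "no"
# the worlds that remain determine what the third wise man can infer
```

Each public "no" shrinks the set of worlds, which is exactly how the accessibility relations of the Kripke structure change after each answer.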
Problem III (Lifschitz, 1986): (30 points)
Although the Yale Shooting Problem is most commonly attributed to
Steve Hanks and Drew McDermott, the underlying problem ---
the problem of multiple extensions in a nonmonotonic theory
of temporal reasoning --- was also independently
discovered by Vladimir Lifschitz in the context of the blocks world.
Lifschitz formalized causal rules by stating that unless an action
was abnormal in some respect, it had a particular effect.
For example, one way to say that pulling on the handle of a door typically
results in the door being open would be:
~ab(pull-handle(door),s) ==> Holds(results(s,pull-handle(door)), open(door))
One could then specify the conditions in which the action was abnormal (didn't have the expected effect). For example, one might say that pulling a door handle was abnormal (didn't have the expected effect) in any situation in which the door was locked:
Holds(s,locked(door)) ==> ab(pull-handle(door),s)
In a 1986 paper, Lifschitz showed that this approach would not give the expected results. He formalized the following 2 principles of a blocks-world theory:
He then considered what happens if one tries to move block A to the top of block B, and then tries to move block B to the top of block A. He showed that this theory yields 2 extensions, one of which is unintended.
Your task is to show in detail the problem that Lifschitz discovered. To do this, you should work the problem both in default logic and in circumscription:
1. For default logic, you will need a default rule of the following form:
: ~ab(event,s) / ~ab(event,s)
2. For circumscription, you will be circumscribing the predicate ab.
In circumscription, you should get (at least) 2 (classes of) minimal models; in default logic, you should get (at least) 2 extensions. Which is the intuitive minimal model/extension?
More major hints:
You are given a sufficient condition for the Move action being abnormal, but not necessary conditions. So in some models, the Move action may be abnormal even if there's no "reason" for it to be abnormal. You might initially think that these models won't be minimal so there's no point enumerating them. Don't fall into this trap. Enumerate these models. Then see what happens in these models when you try to do the next action. Also spell out what happens in the models that are intuitively preferred.
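To see the flavor of the multiple-extension phenomenon before tackling the blocks world, here is a toy model-enumeration sketch of the Yale shooting story itself (all names invented here; this is NOT Lifschitz's blocks-world theory, which you must formalize yourself):

```python
from itertools import product

# The gun is loaded and the victim alive at t=0; "wait" spans t=0 -> t=1
# and "shoot" spans t=1 -> t=2.  Inertia is conditional on abnormality,
#     ~ab(f,t) ==> (Holds(f,t) <=> Holds(f,t+1))
# and the effect axiom for shooting is hard:
#     Holds(loaded,1) ==> ~Holds(alive,2)
FLUENTS = ("loaded", "alive")

def models():
    """Yield the set of ab-atoms true in each assignment satisfying the axioms."""
    for bits in product([False, True], repeat=8):
        loaded1, loaded2, alive1, alive2, ab_l0, ab_a0, ab_l1, ab_a1 = bits
        holds = {("loaded", 0): True, ("alive", 0): True,
                 ("loaded", 1): loaded1, ("alive", 1): alive1,
                 ("loaded", 2): loaded2, ("alive", 2): alive2}
        ab = {("loaded", 0): ab_l0, ("alive", 0): ab_a0,
              ("loaded", 1): ab_l1, ("alive", 1): ab_a1}
        # conditional inertia for each fluent across each transition
        if any(not ab[(f, t)] and holds[(f, t)] != holds[(f, t + 1)]
               for f in FLUENTS for t in (0, 1)):
            continue
        if holds[("loaded", 1)] and holds[("alive", 2)]:  # effect axiom
            continue
        yield frozenset(atom for atom, v in ab.items() if v)

def minimal_ab_sets():
    """Circumscribe ab: keep the ab-sets with no strictly smaller ab-set."""
    sets = set(models())
    return {s for s in sets if not any(t < s for t in sets)}
```

In this toy encoding, minimizing ab produces the intended model, in which only alive is abnormal across the shoot step, side by side with unintended minimal models such as the one in which loaded is abnormal across the wait (the gun spontaneously unloads) -- exactly the kind of model the hints above tell you to enumerate rather than dismiss.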