In so far as our knowledge of the world is incomplete, new sentences can tell us more about the world. Every counterfactual we are told gives us more information about how the world would be if things were slightly different, relative to some unstated approximate theory. This information can later be used if we find ourselves in a situation that differs from the present one in only a small number of respects, so that the approximate theory is applicable to both. The counterfactual
``If there had been one more book in that box you would not have been able to lift it.'' tells us that in future situations that satisfy the unstated theory the speaker has in mind, boxes with more books in them will be too heavy to lift. This differs from the learning we considered earlier: here we infer a universal from a counterfactual, rather than using the counterfactual as an instance in a learning algorithm.
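To make this inference concrete, here is a minimal sketch in Python, assuming a toy feature-based representation of situations (the dictionaries, feature names, and numeric threshold are all hypothetical illustrations, not part of the account above): the counterfactual, interpreted relative to an approximate theory, yields a universal rule that transfers to future situations the theory covers.

```python
# Hypothetical sketch: extracting a reusable universal from a counterfactual.
# The feature names and numbers below are invented for illustration.

def theory_applies(situation):
    """The unstated approximate theory: only the number of books and the
    lifter's strength matter; weight is roughly proportional to book count."""
    return "num_books" in situation and "lifter_strength" in situation

# The present situation, in which the counterfactual
# "If there had been one more book, you could not have lifted it" is asserted.
present = {"num_books": 30, "lifter_strength": 30}

def too_heavy(situation, threshold=present["num_books"]):
    """Universal inferred from the counterfactual, relative to the theory:
    any box with more books than the present one is too heavy to lift."""
    assert theory_applies(situation)  # the theory must cover the new case
    return situation["num_books"] > threshold

# A later situation with only a small number of differences from the present,
# so the same approximate theory applies and the inferred rule transfers.
future = {"num_books": 35, "lifter_strength": 30}
print(too_heavy(future))  # True
```

The point of the sketch is that the rule `too_heavy` mentions no counterfactuals: once extracted, it is an ordinary universal that applies whenever the approximate theory does.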
If the approximate theory is unstated, to use this counterfactual we need to infer what theory was used. A natural default to use here is to assume that the speaker is using the same theory that you find appropriate to describe the situation.
We can apply counterfactuals in other situations because the theories on which they are based are approximate. The truth of the counterfactual depends only on certain features of the situation, and when these features recur, the same inference may be made. In a later section we give an example in which we derive new facts that do not mention counterfactuals from a counterfactual. In our skiing domain, we show that we can derive a fact about the world (that a certain slope is a turn) from the truth of a counterfactual (``if he had put his weight on his downhill ski, he would not have fallen'').
Counterfactuals are useful for other purposes in AI. Ginsberg [Ginsberg, 1986] suggests that they are useful for planning. They are also closely related to the notion of causality, as discussed in [Pearl, 1988], [Geffner, 1992], and [Pearl, 2000].