An intelligent program will have to use counterfactual conditional sentences, but AI needs to concentrate on useful counterfactuals. An example is ``If another car had come over the hill when you passed just now, there would have been a head-on collision.'' Believing this counterfactual might change one's driving habits, whereas the corresponding material conditional, obviously true in view of the false antecedent, could have no such effect. Counterfactuals permit systems to learn from experiences they don't actually have.
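The contrast can be made concrete in code. The sketch below (all names are illustrative, not from any particular system) evaluates the material conditional, which is vacuously true whenever its antecedent is false, and then evaluates the counterfactual by intervening on a toy world model, setting the antecedent true while holding the rest of the situation fixed:

```python
def material_conditional(p: bool, q: bool) -> bool:
    """p -> q: true whenever p is false, regardless of q."""
    return (not p) or q

def collision(other_car_came: bool, in_oncoming_lane: bool) -> bool:
    """Toy world model: a head-on collision occurs iff an oncoming
    car appears while we occupy its lane."""
    return other_car_came and in_oncoming_lane

# Actual world: we passed in the oncoming lane, but no car came.
actual = {"other_car_came": False, "in_oncoming_lane": True}

# The material conditional is true merely because the antecedent
# is false; it says nothing that could change one's driving.
vacuous = material_conditional(actual["other_car_came"],
                               collision(**actual))

# The counterfactual is evaluated by intervening on the model:
# suppose another car HAD come over the hill, everything else fixed.
intervened = dict(actual, other_car_came=True)
would_collide = collision(**intervened)

print(vacuous)        # True, but uninformatively so
print(would_collide)  # True: the hypothetical experience to learn from
```

The second result is the one that could change driving habits: the program learns from an experience it never had, exactly the use of counterfactuals the paragraph above describes.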
Unfortunately, the Stalnaker-Lewis closest possible world model of counterfactuals doesn't seem helpful in building programs that can formulate and use them.