COMMON SENSE REASONING

Our ability to use common sense knowledge depends on being able to do common sense reasoning.

Much artificial intelligence inference is not designed to apply directly the rules of inference of any of the well-known systems of mathematical logic. There is often no clear separation in the program between determining which inferences are correct and the strategy for finding the inferences required to solve the problem at hand. Nevertheless, the logical system usually corresponds to a subset of first order logic. Systems provide for inferring a fact about one or two particular objects from other facts about those objects and a general rule containing variables. Most expert systems, including MYCIN, never infer general statements, i.e. quantified formulas.
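As a concrete sketch of this style of inference, consider one general rule with variables applied to ground facts about particular objects. The `parent` and `grandparent` relations below are invented for illustration; no actual expert system is being quoted.

```python
# A minimal sketch: infer a ground fact about particular objects from
# other ground facts and one general rule containing variables.
# Facts are (relation, object1, object2) triples; all names are invented.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Rule with variables X, Y, Z: parent(X,Y) and parent(Y,Z) => grandparent(X,Z)
def apply_rule(facts):
    derived = set(facts)
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

print(apply_rule(facts))
# The new conclusion grandparent(alice, carol) is a fact about particular
# objects; the program never infers a new quantified (general) statement.
```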

Human reasoning also involves obtaining facts by observation of the world, and computer programs also do this. Robert Filman did an interesting thesis on observation in a chess world where many facts that could be obtained by deduction are in fact obtained by observation. MYCIN doesn't require this, but our hypothetical robot physician would have to draw conclusions from a patient's appearance, and computer vision is not yet ready for the task.

An important new development in AI (since the middle 1970s) is the formalization of nonmonotonic reasoning.

Deductive reasoning in mathematical logic has the following property -- called monotonicity by analogy with similar mathematical concepts. Suppose we have a set of assumptions from which follow certain conclusions. Now suppose we add additional assumptions. There may be some new conclusions, but every sentence that was a deductive consequence of the original hypotheses is still a consequence of the enlarged set.
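The property can be checked mechanically on a toy system. The following sketch uses invented propositional rules to compute the deductive consequences of an assumption set; enlarging the set never removes a conclusion.

```python
# A sketch of monotonicity using a toy propositional forward-chainer.
# Rules are (premises, conclusion) pairs; all proposition names are invented.

def consequences(assumptions, rules):
    known = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [({"p"}, "q"), ({"q", "r"}, "s")]

small = consequences({"p"}, rules)        # {'p', 'q'}
large = consequences({"p", "r"}, rules)   # {'p', 'q', 'r', 's'}

# Monotonicity: adding assumptions may yield new conclusions,
# but every old conclusion survives.
assert small <= large
```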

Ordinary human reasoning does not share this monotonicity property. If you know that I have a car, you may conclude that it is a good idea to ask me for a ride. If you then learn that my car is being fixed (which does not contradict what you knew before), you no longer conclude that you can get a ride. If you now learn that the car will be out of the shop in half an hour, you reverse yourself again.
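A minimal sketch of the car example, using invented predicate names rather than any standard formalism, shows the pattern: each new fact is consistent with the old ones, yet the conclusion flips back and forth.

```python
# Default rule, informally: conclude "can ask for a ride" unless the car
# is known to be unavailable. All fact names are invented for illustration.

def can_ask_for_ride(facts):
    if "has_car" not in facts:
        return False
    unavailable = ("car_in_shop" in facts
                   and "ready_in_half_hour" not in facts)
    return not unavailable

print(can_ask_for_ride({"has_car"}))                         # True
print(can_ask_for_ride({"has_car", "car_in_shop"}))          # False
print(can_ask_for_ride({"has_car", "car_in_shop",
                        "ready_in_half_hour"}))              # True again
```

Each call adds facts without retracting any, yet the conclusion changes, which is exactly the failure of monotonicity.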

Several artificial intelligence researchers, for example Marvin Minsky (1974), have pointed out that intelligent computer programs will have to reason nonmonotonically. Some concluded that therefore logic is not an appropriate formalism.

However, it has turned out that deduction in mathematical logic can be supplemented by additional modes of nonmonotonic reasoning, which are just as formal as deduction and just as susceptible to mathematical study and computer implementation. Formalized nonmonotonic reasoning turns out to give certain rules of conjecture rather than rules of inference -- their conclusions are appropriate, but may be disconfirmed when more facts are obtained. One such method is circumscription, described in (McCarthy 1980).

A mathematical description of circumscription is beyond the scope of this lecture, but the general idea is straightforward. We have a property applicable to objects or a relation applicable to pairs or triplets, etc. of objects. This property or relation is constrained by some sentences taken as assumptions, but there is still some freedom left. Circumscription further constrains the property or relation by requiring it to be true of a minimal set of objects.
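As a rough illustration (not the actual mathematical formulation), circumscription can be brute-forced over a finite domain: enumerate every extension of the predicate that satisfies the assumed sentences and keep only the minimal ones. The domain and the single axiom below are invented for the example.

```python
# Brute-force minimization of a predicate P over a finite domain:
# among all extensions of P satisfying the axioms, keep those minimal
# under set inclusion. Domain and axiom are invented for illustration.

from itertools import chain, combinations

domain = ["a", "b", "c"]

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# Axiom constraining P: P(a) must hold; freedom remains about b and c.
def satisfies_axioms(p):
    return "a" in p

models = [p for p in subsets(domain) if satisfies_axioms(p)]
minimal = [p for p in models if not any(q < p for q in models)]

print(minimal)   # [frozenset({'a'})] -- circumscription concludes only P(a)
```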

As an example, consider representing the facts about whether an object can fly in a database of common sense knowledge. We could try to provide axioms that will determine whether each kind of object can fly, but this would make the database very large. Circumscription allows us to express the assumption that only those objects can fly for which there is a positive statement to that effect. Thus there will be positive statements that birds and airplanes can fly and no statement that camels can fly. Since we don't include negative statements in the database, we could provide for flying camels, if there were any, by adding statements without removing existing statements. This much is often done by a simpler method -- the closed world assumption discussed by Raymond Reiter. However, we also have exceptions to the general statement that birds can fly. For example, penguins, ostriches and birds with certain feathers removed can't fly. Moreover, more exceptions may be found, and even exceptions to the exceptions. Circumscription allows us to state the known exceptions and to provide for additional exceptions to be added later -- again without changing existing statements.
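A sketch of this arrangement, with invented data: positive statements say which kinds fly, exceptions override them, and both sets can be extended later without removing any existing statement.

```python
# "Birds fly, with exceptions" in the closed-world spirit: an object flies
# if its kind is positively asserted to fly and it is not a known exception.
# All data below is invented for illustration.

flying_kinds = {"bird", "airplane"}    # positive statements only
exceptions = {"penguin", "ostrich"}    # known non-flying kinds of bird
bird_kinds = {"bird", "penguin", "ostrich"}
kind_of = {"tweety": "bird", "opus": "penguin", "clem": "camel"}

def flies(name):
    kind = kind_of[name]
    if kind in exceptions:
        return False
    base = "bird" if kind in bird_kinds else kind
    return base in flying_kinds

for name in ("tweety", "opus", "clem"):
    print(name, flies(name))   # tweety True, opus False, clem False

# Flying camels, or a new exception, would be handled by adding an element
# to flying_kinds or exceptions -- no existing statement is changed.
```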

Nonmonotonic reasoning also seems to be involved in human communication. Suppose I hire you to build me a bird cage, and you build it without a top, and I refuse to pay on the grounds that my bird might fly away. A judge will side with me. On the other hand suppose you build it with a top, and I refuse to pay full price on the grounds that my bird is a penguin, and the top is a waste. Unless I told you that my bird couldn't fly, the judge will side with you. We can therefore regard it as a communication convention that if a bird can fly the fact need not be mentioned, but if the bird can't fly and it is relevant, then the fact must be mentioned.

REFERENCES

Davis, Randall; Buchanan, Bruce; and Shortliffe, Edward (1977). Production Rules as a Representation for a Knowledge-Based Consultation Program, Artificial Intelligence, Volume 8, Number 1, February.

McCarthy, John (1960). Programs with Common Sense, Proceedings of the Teddington Conference on the Mechanization of Thought Processes, London: Her Majesty's Stationery Office. (Reprinted in this volume, pp. 000-000).

McCarthy, John and Patrick Hayes (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence, in B. Meltzer and D. Michie (eds), Machine Intelligence 4, Edinburgh University. (Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 431-450; also in M. J. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 26-45; also in this volume, pp. 000-000.)

McCarthy, John (1980). Circumscription -- A Form of Nonmonotonic Reasoning, Artificial Intelligence, Volume 13, Numbers 1,2. (Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 466-472; also in M. J. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 145-152; also in this volume, pp. 000-000.)

Minsky, Marvin (1974). A Framework for Representing Knowledge, M.I.T. AI Memo 252.

Shortliffe, Edward H. (1976). Computer-Based Medical Consultations: MYCIN, American Elsevier, New York, NY.

QUESTION: You said the programs need common sense, but that's like saying, If I could fly I wouldn't have to pay Eastern Airlines $44 to haul me up here from Washington. So if the programs indeed need common sense, how do we go about it? Isn't that the point of the argument?

DR. MCCARTHY: I could have made this a defensive talk about artificial intelligence, but I chose to emphasize the problems that have been identified rather than the progress that has been made in solving them. Let me remind you that I have argued that the need for common sense is not a truism. Many useful things can be done without it, e.g. MYCIN and also chess programs.

QUESTION: There seemed to be a strong element in your talk about common sense, and even humans developing it, emphasizing an experiential component -- particularly when you were giving your example of dropping a glass of water. I'm wondering whether the development of these programs is going to take similar amounts of time. Are you going to have to have them go through the sets of experiences and be evaluated? Is there work going on in terms of speeding up the process, or is it going to take 20 years for a program, from the time you've put it in its initial state, to work up to where it has a decent amount of common sense?

DR. MCCARTHY: Consider your 20 years. If anyone had known in 1963 how to make a program learn from its experience to do what a human does after 20 years, they might have done it, and it might be pretty smart by now. Already in 1958 there had been work on programs that learn from experience. However, all they could learn was to set optimal values of numerical parameters in the program, and they were quite limited in their ability to do that. Arthur Samuel's checker program learned optimal values for its parameters, but the problem was that certain kinds of desired behavior did not correspond to any setting of the parameters, because they depended on the recognition of a certain kind of strategic situation. Thus the first prerequisite for a program to be able to learn something is that it be able to represent internally the desired modification of behavior. Simple changes in behavior must have simple representations. Turing's universality theorem convinces us that arbitrary behaviors can be represented, but it doesn't tell us how to represent them in such a way that a small change in behavior is a small change in representation. Present methods of changing programs amount to education by brain surgery.
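As a rough sketch of this kind of parameter learning (not Samuel's actual program; the features and training data are invented), consider tuning the weights of a fixed linear evaluation function by hill climbing. The limitation mentioned above is visible in the form itself: no setting of the weights can produce a behavior the features don't encode.

```python
# Tune numeric weights of a fixed linear evaluation function by random
# hill climbing. Features and training data are invented for illustration.

import random

def evaluate(features, weights):
    return sum(f * w for f, w in zip(features, weights))

# Toy training data: (feature vector, desired score) pairs,
# realizable here with weights [2.0, -1.0].
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]

def loss(weights):
    return sum((evaluate(f, weights) - y) ** 2 for f, y in data)

weights = [0.0, 0.0]
for _ in range(2000):
    candidate = [w + random.uniform(-0.1, 0.1) for w in weights]
    if loss(candidate) < loss(weights):
        weights = candidate

print(weights)   # approaches [2.0, -1.0]
# Only behaviors expressible as some weight setting over these fixed
# features can be learned; a strategic pattern the features don't
# capture is out of reach no matter how the weights are tuned.
```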

QUESTION: I would ask you a question about programs needing common sense in a slightly different way, and I want to use the MYCIN program as an example.

There are three actors there -- the program, the physician, and the patient. Taking as a criterion the safety of the patient, I submit that you need at least two of these three actors to have common sense.

For example, if (and sometimes this is the case) only one were sufficient, it would have to be the patient, because if the program didn't use common sense and the physician didn't use common sense, the patient would have to have common sense and just leave. But usually, if the program had common sense built in and the physician had common sense but the patient didn't, it really might not matter, because the patient would do what he or she wants to do anyway.

Let me take another possibility. If only the program has common sense and neither the physician nor the patient has any, then in the long run the program's common sense will not be used either. What I want to say is that these issues of common sense must be looked at in this kind of frame of reference.

DR. MCCARTHY: In the use of MYCIN, the physician is supposed to supply the common sense. The question is whether the program must also have common sense, and I would say that the answer is not clear in the MYCIN case. Purely computational programs don't require common sense, and none of the present chess programs have any. On the other hand, it seems clear that many other kinds of programs require common sense to be useful at all.

