We shall begin by discussing how to express such facts as
*``Pat knows the combination of the safe''*, although the idea of
treating a concept as an object has application beyond the
discussion of knowledge.

We shall use the symbol *safe*1 for the safe,
and *combination*(*s*) is our
notation for the combination of an arbitrary safe *s*.
We aren't much
interested in the domain of combinations, and we shall take them
to be strings of digits with dashes in the right place, and, since
a combination is a string, we will write it in quotes.
Thus we can write

    *combination*(*safe*1) = "45-25-17"

as a formalization of the English *``The combination of the safe is
45-25-17''*. Let us suppose that the combination of *safe*2 is,
coincidentally, also 45-25-17,
so we can also write

    *combination*(*safe*2) = "45-25-17".

Now we want to translate *``Pat knows the combination of
the safe''*. If we were to express it as

    *knows*(*pat*,*combination*(*safe*1)),                (8)

the inference rule that allows replacing a term by an equal term in
first order logic would let us conclude *knows*(*pat*,*combination*(*safe*2)),
which mightn't be true.
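
The difficulty can be made concrete with a small sketch of our own (the dictionaries and names below are hypothetical, not the paper's notation): if knowing is modelled as any function of the combination's *value*, equal values force equal answers.

```python
# Hypothetical model: knowledge as a predicate of the denoted value alone.
combination = {"safe1": "45-25-17", "safe2": "45-25-17"}  # equal values

pat_known_values = {"45-25-17"}  # what Pat can recite

def knows(known_values, value):
    # any function of the value cannot tell safe1's combination from safe2's
    return value in known_values

# combination["safe1"] == combination["safe2"], so substitution of equals
# for equals forces the two claims to agree, even if Pat never heard of safe2:
assert knows(pat_known_values, combination["safe1"]) == \
       knows(pat_known_values, combination["safe2"])
```

Any value-level encoding of *knows* has this defect; the problem is in the representation, not in the particular functions chosen.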

This problem was already recognized in 1879 by Frege,
the founder of modern predicate logic, who distinguished
between direct and indirect occurrences of expressions and would
consider the occurrence of *combination*(*safe*1) in (8) to
be indirect and not subject to replacement of equals by equals.
The modern way of stating the problem is to call *Pat knows*
a referentially opaque operator.

The way out of this difficulty currently most popular
is to treat *Pat knows* as a *modal operator*.
This involves changing the logic so that replacement of an expression
by an equal expression is not allowed in opaque contexts. Knowledge
is not the only operator that admits modal treatment. There are
also belief, wanting, and logical or physical necessity. For
AI purposes, we would need all the above modal operators and many
more in the same system. This would make the semantic discussion
of the resulting modal logic extremely complex. For this reason,
and because we want functions from material objects to concepts of
them, we have followed a different path--introducing concepts as
individual objects. This has not been popular in philosophy,
although I suppose no-one would doubt that it could be done.

Our approach is to introduce the symbol *Safe*1 as
a name for the concept of *safe*1 and the function *Combination* which
takes a concept of a safe into a concept of its combination.
The second operand of the function *knows* is now required to be a concept,
and we can write

    *knows*(*pat*,*Combination*(*Safe*1))

to assert that Pat knows the combination of *safe*1. The previous trouble
is avoided so long as we can assert

    *Combination*(*Safe*1) ≠ *Combination*(*Safe*2),

which is quite reasonable, since we do not consider the concept
of the combination of *safe*1 to be the same as the concept of
the combination of *safe*2,
even if the combinations themselves are the same.
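
The move can be sketched in code (our representation, not the paper's): concepts become structured terms, so *Combination*(*Safe*1) and *Combination*(*Safe*2) are distinct objects even when the combinations themselves coincide.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A concept term: distinct from the object it may denote."""
    name: str
    args: tuple = ()

Safe1 = Concept("Safe1")
Safe2 = Concept("Safe2")

def Combination(safe_concept):
    # the concept of the combination of a safe, built from the safe's concept
    return Concept("Combination", (safe_concept,))

# the combinations themselves coincide ...
combination = {"safe1": "45-25-17", "safe2": "45-25-17"}
assert combination["safe1"] == combination["safe2"]

# ... but the concepts remain distinct terms, so asserting knowledge of
# Combination(Safe1) no longer licenses knowledge of Combination(Safe2):
assert Combination(Safe1) != Combination(Safe2)
```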

We write

    *denotes*(*Safe*1,*safe*1)

and say that *safe*1 is the denotation of *Safe*1. We can say that
Pegasus doesn't exist by writing

    ¬(∃*x*)*denotes*(*Pegasus*,*x*),

still admitting *Pegasus* as a perfectly good concept. If we only
admit concepts with denotations (or admit
partial functions into our system), we can regard denotation as a function
from concepts to objects--including other concepts. We can then
write

    *safe*1 = *denot*(*Safe*1).

The functions *combination* and *Combination* are related
in a way that we may call extensional, namely

    (∀*S*)(*denot*(*Combination*(*S*)) = *combination*(*denot*(*S*))),

and we can also write this relation in terms of *Combination* alone as

    (∀*S*1 *S*2)(*denot*(*S*1) = *denot*(*S*2) ⊃ *denot*(*Combination*(*S*1)) = *denot*(*Combination*(*S*2))),

or, in terms of the denotation predicate,

    (∀*S*1 *S*2 *s* *c*)(*denotes*(*S*1,*s*) ∧ *denotes*(*S*2,*s*) ∧ *denotes*(*Combination*(*S*1),*c*) ⊃ *denotes*(*Combination*(*S*2),*c*)).

It is precisely this property of extensionality that the above-mentioned
*knows* predicate lacks in its second argument; it is extensional in
its first argument.
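
Under the same kind of term representation, denotation can be sketched as a recursive evaluation function; *denot* and the lookup table here are our stand-ins, used only to check the extensional relation between *combination* and *Combination*.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    name: str
    args: tuple = ()

Safe1, Safe2 = Concept("Safe1"), Concept("Safe2")
combination = {"safe1": "45-25-17", "safe2": "45-25-17"}  # object-level function

def denot(c):
    """Map a concept term to the object it denotes (partial in general)."""
    if c.name in ("Safe1", "Safe2"):
        return c.name.lower()               # Safe1 denotes safe1, etc.
    if c.name == "Combination":
        return combination[denot(c.args[0])]
    raise ValueError("concept without a denotation: " + c.name)

# the extensional relation: denot(Combination(S)) = combination(denot(S))
for S in (Safe1, Safe2):
    assert denot(Concept("Combination", (S,))) == combination[denot(S)]
```

A concept without a denotation, like *Pegasus*, simply falls through to the error case, which is why denotation is a partial function unless such concepts are excluded.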

Suppose we now want to say *``Pat knows that Mike knows
the combination of safe1''*. We cannot use *knows*(*mike*,*Combination*(*Safe*1))
as an operand of another *knows* function for two reasons. First,
the value of *knows*(*person*,*Concept*) is a truth value, and there are
only two truth values, so we would either have Pat knowing all true
statements or none. Second, English
treats knowledge of propositions differently from the way it treats
knowledge of the value of a term. To know a proposition is to know
that it is true, whereas the analog of knowing a combination would
be knowing whether the proposition is true.

We solve the first problem by introducing a new knowledge function

    *Knows*(*Person*,*Concept*).

*Knows*(*Mike*,*Combination*(*Safe*1))
is not a truth value but a *proposition*, and there can be distinct
true propositions. We now need a predicate *true*(*proposition*),
so we can assert

    *true*(*Knows*(*Mike*,*Combination*(*Safe*1))),

which is equivalent to our old-style assertion

    *knows*(*mike*,*Combination*(*Safe*1)).

We now write

    *true*(*Knows*(*Pat*,*Knows*(*Mike*,*Combination*(*Safe*1))))

to assert that Pat knows *whether* Mike knows the combination
of safe1. We define

    *K*(*Person*,*Proposition*) = *and*(*Proposition*,*Knows*(*Person*,*Proposition*)),                (11)

which forms the proposition *that* a person knows a proposition from
the truth of the proposition and that he knows whether the proposition
holds. Note that it is necessary to have new connectives to combine
propositions and that an equality sign rather than an equivalence sign
is used. As far as our first order logic is concerned, (11) is
an assertion of the equality of two terms. These matters are discussed
thoroughly in (McCarthy 1979b).
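
A rough sketch of why propositions-as-terms allow nesting (the classes and names below are ours, not the formal system of McCarthy 1979b): because *Knows* builds a term rather than evaluating to a truth value, it can take its own results as arguments.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    """A proposition as a term, not a truth value, so it can be nested."""
    op: str
    args: tuple

def Knows(person, what):
    # the proposition that person knows (whether) `what`
    return Prop("Knows", (person, what))

def And(p, q):
    # a connective on propositions: it builds a term, it does not evaluate
    return Prop("and", (p, q))

def K(person, p):
    # knowing *that* p: the conjunction of p and knowing whether p
    return And(p, Knows(person, p))

inner = Knows("Mike", "Combination(Safe1)")
nested = Knows("Pat", inner)      # legal: inner is a term, not True/False
assert isinstance(nested, Prop)
assert K("Mike", inner) == And(inner, Knows("Mike", inner))
```

Note that the equality in the last line is equality of terms, mirroring the remark that (11) asserts the equality of two terms rather than an equivalence.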

While a concept denotes at most one object, the same object
can be denoted by many concepts. Nevertheless, there are often useful
functions from objects to concepts that denote them. Numbers may
conveniently be regarded as having *standard concepts*, and an object
may have a distinguished concept relative to a particular person.
(McCarthy 1977b) illustrates the use of functions from objects to
concepts in formalizing such chestnuts as Russell's, *``I thought your
yacht was longer than it is''*.

The most immediate AI problem that requires concepts for its
successful formalization may be the relation between knowledge and ability.
We would like to connect Mike's ability to open safe1 with his knowledge
of the combination. The proper formalization of the notion of *can*
that involves knowledge rather than just physical possibility hasn't
been done yet.
Moore (1977) discusses the relation between knowledge and action from
a similar point of view, and
(McCarthy 1977b) contains some ideas about this.

There are obviously some esthetic disadvantages to a
theory that has both *mike* and *Mike*. Moreover, natural
language doesn't make such distinctions in its vocabulary, expressing
them instead in rather roundabout ways when necessary.
Perhaps we could manage with just *Mike*
(the concept),
since the *denotation* function will be available for referring to *mike*
(the person himself). It makes some sentences longer, and we have
to use an equivalence relation which we may call *eqdenot* and
say ``*Mike eqdenot Brother*(*Mary*)'' rather than write
``*mike* = *brother*(*mary*)'', reserving the equality sign for
equal concepts. Since many AI programs don't make much use of
replacement of equals by equals, their notation may admit either
interpretation, i.e., the formulas may stand for either objects
or concepts.
The biggest objection is that the semantics of reasoning about
objects is more complicated if one refers to them only via concepts.

I believe that circumscription will turn out to be the key to inferring non-knowledge. Unfortunately, an adequate formalism has not yet been developed, so we can only give some ideas of why establishing non-knowledge is important for AI and how circumscription can contribute to it.

If the robot can reason that it cannot open safe1, because it doesn't know the combination, it can decide that its next task is to find the combination. However, if it has merely failed to determine the combination by reasoning, more thinking might solve the problem. If it can safely conclude that the combination cannot be determined by reasoning, it can look for the information externally.

As another example, suppose someone asks you whether the President is standing, sitting or lying down at the moment you read the paper. Normally you will answer that you don't know and will not respond to a suggestion that you think harder. You conclude that no matter how hard you think, the information isn't to be found. If you really want to know, you must look for an external source of information. How do you know you can't solve the problem? The intuitive answer is that any answer is consistent with your other knowledge. However, you certainly don't construct a model of all your beliefs to establish this. Since you undoubtedly have some contradictory beliefs somewhere, you can't construct the required models anyway.

The process has two steps. The first is deciding what knowledge is relevant. This is a conjectural process, so its outcome is not guaranteed to be correct. It might be carried out by some kind of keyword retrieval from property lists, but there should be a less arbitrary method.

The second process uses the set of ``relevant'' sentences found by the first process and constructs models or circumscription predicates that allow for both outcomes if what is to be shown unknown is a proposition. If what is to be shown unknown has many possible values, like a safe combination, then something more sophisticated is necessary. A parameter called the value of the combination is introduced, and a ``model'' or circumscription predicate is found in which this parameter occurs free. We used quotes because a one-parameter family of models is found rather than a single model.
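
The parametrized-model idea can be sketched as a brute-force check, with an invented belief set standing in for the relevant sentences: if every candidate value of the free parameter is consistent with them, reasoning alone cannot determine the combination, and the robot should look for the information externally.

```python
from itertools import product

def consistent(beliefs, candidate):
    # stand-in for a real consistency check over the relevant sentences
    return all(constraint(candidate) for constraint in beliefs)

# toy "relevant" belief set: the robot knows only the format of combinations
beliefs = [lambda v: v.count("-") == 2 and len(v) == 8]

# the free parameter: every candidate value of the combination
candidates = ("%02d-%02d-%02d" % t for t in product(range(100), repeat=3))

# a one-parameter family of consistent "models" means the value is unknowable
unknown = all(consistent(beliefs, v) for v in candidates)
if unknown:
    print("the combination cannot be determined by reasoning alone")
```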

We conclude with just one example of a circumscription schema
dealing with knowledge. It is a formalization of the assertion that
all Mike knows is a consequence of propositions *P*0 and *Q*0.
