Suppose we have the formula $ist(c, p)$. We can then
enter the context c and infer the formula p.
Conversely, if we have the formula p we can infer
$ist(c, p)$ by exiting the context c. We don't always
want to be explicit about the sequence of all the contexts that were
entered, but the logic needs to be such that the system always exits
into the context it was in before entering. The enter and exit
operations can be thought of as the push and pop operations on a
stack. In the logic presented in [12] the
sequence of contexts that has been entered is always explicitly
stated.
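The push-and-pop discipline can be sketched concretely. The short Python fragment below is our own illustration rather than part of any proposed formalism: sentences are represented as bare strings, and the names ContextReasoner, enter and leave are invented here; the point is only to mirror the bookkeeping by which $ist(c, p)$ becomes available as p on entering c, and a q obtained inside c is exported as $ist(c, q)$ on leaving.

    class ContextReasoner:
        def __init__(self, outer_facts):
            self.stack = []                 # contexts entered so far, innermost last
            self.facts = set(outer_facts)   # sentences asserted in the current context

        def enter(self, c):
            # Push c; every sentence p with ist(c, p) asserted outside
            # becomes available simply as p inside c.
            prefix = "ist(" + c + ", "
            imported = {s[len(prefix):-1] for s in self.facts
                        if s.startswith(prefix) and s.endswith(")")}
            self.stack.append((c, self.facts))
            self.facts = imported

        def leave(self):
            # Pop the innermost context c; every q obtained inside it is
            # exported to the outer context as ist(c, q).
            c, outer = self.stack.pop()
            self.facts = outer | {"ist(" + c + ", " + q + ")" for q in self.facts}

    r = ContextReasoner({"ist(c, p)"})
    r.enter("c")                  # p is now available inside c
    r.facts.add("q")              # stand-in for inferring q from p within c
    r.leave()                     # exits into the context it was in before entering
    assert "ist(c, q)" in r.facts

Because each leave pops exactly the context pushed by the matching enter, the system always returns to the context it was in before entering, as required above.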
We can
regard $ist(c, p)$ as analogous to
$c \supset p$, and the operation of
entering c as analogous to assuming c in a system of
natural deduction as invented by Gentzen and described in many
logic texts. Indeed a context is a generalization of a
collection of assumptions, but there are important differences. For
example, contexts contain linguistic assumptions as well as
declarative ones, and a context may correspond to an infinite and only
partially known collection of assumptions. Moreover, because
relations among contexts are expressed as sentences in the language,
this allows inferences within the language that could only be
done at the meta-level of the usual natural deduction systems.
There are various ways of handling the reasoning step of
entering a context. The way most analogous to the usual natural
deduction systems is to have an operation of entering c. Having done
this, one could then write any p for which one already had
$ist(c, p)$. However, it seems more convenient in interactive
theorem proving to use the style of Jussi Ketonen's EKL
interactive theorem prover [35]. In
the style of that system, if one had
$ist(c, p)$, one could immediately
write p, and the system would keep track of the dependence on c.
To avoid ambiguity as to where an occurrence of
$ist(c, p)$ came from,
one might have to refer to a line number in the derivation. Having
obtained p by entering c and then inferring some sentence q, one
can leave c and get
$ist(c, q)$. In natural deduction, this
would be called discharging the assumption c.
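Written out in the EKL-like style just described, the whole round trip becomes a short derivation (the numbering and layout below are ours, added only for illustration):

\[
\begin{array}{lll}
1. & ist(c, p) & \text{given in the outer context}\\
2. & p & \text{enter } c \text{ (from 1)}\\
3. & q & \text{inferred from } p \text{ while in } c\\
4. & ist(c, q) & \text{leave } c \text{, discharging the dependence on } c \text{ (from 3)}
\end{array}
\]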
Human natural language risks ambiguity by not always specifying such assumptions, relying on the hearer or reader to guess what context makes sense. The hearer employs a principle of charity and chooses an interpretation that assumes the speaker is making sense. In AI usage we probably don't want computers to make assertions that depend on principles of charity for their interpretation.
We are presently doubtful that the reasoning we will want our programs to do on their own will correspond closely to using an interactive theorem prover. Therefore, it isn't clear whether the above ideas for implementing entering and leaving contexts will be what we want.
Sentences of the form $ist(c, p)$ can themselves be true in contexts, e.g. we
can have $ist(c', ist(c, p))$. In this draft, we will ignore the fact that
if we want to stay in first order logic, we should reify assertions and write
something like $ist(c, \text{``}p\text{''})$, where $\text{``}p\text{''}$ is
a term rather than a wff. Actually the same problem arises for p itself; the
occurrence of p in $ist(c, p)$ might have to be syntactically distinct from
the occurrence of p standing by itself. As an alternative to reifying
assertions we could use a modal logic; this approach is investigated in
[52, 9, 4, 59].
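To make the reification point concrete, here is a minimal sketch (ours, with the invented names Atom and Ist) in which sentences are represented as terms, so that the second argument of ist is a term naming a sentence rather than a wff, and ist-assertions nest as in $ist(c', ist(c, p))$:

    from dataclasses import dataclass

    # Sentences are reified as terms; Ist takes a term naming a sentence,
    # never a raw formula, so the representation stays first order.
    @dataclass(frozen=True)
    class Atom:
        name: str                  # a term naming an atomic sentence, e.g. Atom("p")

    @dataclass(frozen=True)
    class Ist:
        context: str
        sentence: "Atom | Ist"     # itself a term, so ist-assertions nest

    p = Atom("p")
    inner = Ist("c", p)            # stands for ist(c, "p")
    outer = Ist("c'", inner)       # stands for ist(c', ist(c, "p"))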