Gödel's second incompleteness theorem [Gödel, 1965] tells us that
a consistent logical theory *T*0 strong enough to do Peano arithmetic
cannot admit a proof of its own consistency. However, if we believe
the theory *T*0, we will believe that it is consistent. We can add
the statement *consis*(*T*0) asserting that *T*0 is consistent to *T*0
getting a stronger theory *T*1. By the incompleteness theorem, *T*1
cannot admit a proof of *consis*(*T*1), and so on. Adding a consistency
statement for what we already believe is a *self-confidence
principle.*
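The iteration just described can be written out formally. Here is a minimal sketch, where *consis*(*T*) abbreviates the arithmetized statement that the theory *T* is consistent:

```latex
\begin{align*}
T_1 &= T_0 + \mathit{consis}(T_0)\\
T_2 &= T_1 + \mathit{consis}(T_1)\\
&\;\;\vdots\\
T_{n+1} &= T_n + \mathit{consis}(T_n), \qquad n = 0, 1, 2, \ldots
\end{align*}
```

By the second incompleteness theorem, each *T*n+1 is strictly stronger than *T*n (assuming each is consistent), so the iteration never stabilizes.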

Alan Turing [Turing, 1939] studied iterated statements of
consistency, pointing out that we can continue the iteration of
self-confidence to form *T*ω, which asserts that all the *Tn* are
consistent. Moreover, the iteration can be continued through the
*recursive ordinal numbers*. Solomon Feferman [Feferman, 1962]
studied a more powerful iteration principle than Turing's called
*transfinite progressions of theories*.
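Turing's continuation through the recursive ordinals can be sketched as follows; this is the standard formulation, with successor stages adding a consistency statement and limit stages taken as unions (the details of ordinal notations, which Feferman's analysis addresses, are suppressed here):

```latex
\begin{align*}
T_{\alpha+1} &= T_\alpha + \mathit{consis}(T_\alpha)
  && \text{(successor stage)}\\
T_\lambda &= \bigcup_{\alpha < \lambda} T_\alpha
  && \text{(limit stage, e.g. } T_\omega = \bigcup_{n} T_n \text{)}
\end{align*}
```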

There is no single computable iterative self-confidence process that gets everything. If there were, we could put it in a single logical system, and Gödel's theorem would apply to it.

For AI purposes, *T*1, which is equivalent to induction up to the
ordinal ε0, may suffice.
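For reference, ε0 is the least ordinal closed under ordinal exponentiation with base ω; Gentzen showed that induction up to ε0 suffices to prove the consistency of Peano arithmetic. A standard characterization:

```latex
\epsilon_0 \;=\; \sup\,\{\,\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots\,\},
\qquad \omega^{\epsilon_0} = \epsilon_0 .
```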

The relevance to AI of Feferman's transfinite progressions is at least to refute naive arguments based on the incompleteness theorem that AI is impossible.

A robot thinking about self-confidence principles is performing a kind
of introspection. For this it needs not only the iterates of *T*0 but
also the ability to think about theories in general, i.e. to use a
formalism with variables ranging over theories.

Mon Jul 15 13:06:22 PDT 2002