Gödel's second incompleteness theorem [Gödel, 1965] tells us that a consistent logical theory T0 strong enough to do Peano arithmetic cannot admit a proof of its own consistency. However, if we believe the theory T0, we will believe that it is consistent. We can add the statement consis(T0), asserting that T0 is consistent, to T0, getting a stronger theory T1. By the incompleteness theorem, T1 cannot admit a proof of consis(T1), and so on. Adding consistency statements for what we already believe is a self-confidence principle.
Alan Turing [Turing, 1939] studied iterated statements of consistency, pointing out that we can continue the iteration of self-confidence to form Tω, which asserts that all the Tn are consistent. Moreover, the iteration can be continued through the recursive ordinal numbers. Solomon Feferman [Feferman, 1962] studied a more powerful iteration principle than Turing's, called transfinite progressions of theories.
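One common way to write out such a progression (a sketch; Turing's and Feferman's precise formulations differ in how consistency is expressed at each stage) is:

```latex
\begin{align*}
T_{n+1} &= T_n \cup \{\mathrm{consis}(T_n)\}, \\
T_{\omega} &= \bigcup_{n<\omega} T_n \cup \{\mathrm{consis}(T_n) : n < \omega\}, \\
T_{\alpha+1} &= T_\alpha \cup \{\mathrm{consis}(T_\alpha)\}, \qquad
T_{\lambda} = \bigcup_{\alpha<\lambda} T_\alpha
\text{ for limit ordinals } \lambda,
\end{align*}
```

with the ordinals α, λ ranging over the recursive ordinals.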
There is no single computable iterative self-confidence process that gets everything. If there were, we could put it in a single logical system, and Gödel's theorem would apply to it.
For AI purposes, T1, which is equivalent to induction up to the ordinal ε0, may suffice.
The relevance of Feferman's transfinite progressions to AI is at least this: they refute naive arguments, based on the incompleteness theorem, that AI is impossible.
A robot thinking about self-confidence principles is performing a kind of introspection. For this it needs not only the iterates of T0 but also the ability to think about theories in general, i.e. to use a formalism with variables ranging over theories.
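As a toy illustration of treating theories as objects a program can range over (not from the source; the names `Theory` and `add_consistency` are hypothetical), one might represent a theory as a finite set of axiom labels and iterate the self-confidence principle on it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Theory:
    """A toy model of a theory: a name plus a finite set of axiom labels."""
    name: str
    axioms: frozenset

def add_consistency(t: Theory) -> Theory:
    """Return the stronger theory t + consis(t), one step of the progression."""
    return Theory(name=f"{t.name}+consis({t.name})",
                  axioms=t.axioms | {f"consis({t.name})"})

# Iterate the self-confidence principle a few times, starting from T0.
T0 = Theory("T0", frozenset({"PA"}))
Tn = T0
for _ in range(3):
    Tn = add_consistency(Tn)
# Tn now contains a consistency assertion for each earlier stage.
```

This only manipulates names, of course; it says nothing about provability. The point it illustrates is that a formalism in which theories are first-class values lets a program quantify over them, as the introspecting robot must.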