There is a large difference between the human mind and the ape mind, yet human intelligence evolved from ape-like intelligence in a short time, as evolution goes. Our conjecture is that besides the larger brain, there is one qualitative difference--consciousness. The evolutionary step consisted of making more of the brain's own state observable than was possible for our ape-like ancestors. The consequence was that we could learn procedures that take the state of the brain into account, e.g. previous observations, knowledge or the lack of it, etc.
The consequence for AI is that maybe introspection can be introduced into problem solving in a rather simple way--letting actions depend on the state of the mind and not just on the state of the external world as revealed by observation.
This suggests designing logical robots with observation as a subconscious process, i.e. one taking place mainly in the background rather than as a result of decisions. Observation results in sentences in consciousness. Deliberate observations should also be possible. The mental state itself would then be one aspect of the world that is subconsciously observed.
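The design above can be sketched in code. This is a minimal illustrative sketch, not a worked-out formalism: all class and predicate names here are my own assumptions. Background observation runs automatically on every step and deposits sentences into a set playing the role of consciousness; the agent's policy can then consult those sentences, including sentences about what the agent does not know.

```python
class IntrospectiveAgent:
    """Sketch of an agent whose actions depend on its mental state."""

    def __init__(self):
        # The sentences the agent can itself observe: its "consciousness".
        self.consciousness = set()

    def observe(self, world):
        # Subconscious observation: runs every step, not by decision.
        for fact in world:
            self.consciousness.add(("knows", fact))

    def introspect(self, fact):
        # Deliberate observation of the mental state itself: the agent
        # can notice that it lacks knowledge of some fact.
        if ("knows", fact) not in self.consciousness:
            self.consciousness.add(("does-not-know", fact))

    def act(self):
        # The action depends on the state of mind, not just the external
        # world: knowing that it is ignorant, the agent investigates.
        gaps = [f for (tag, f) in self.consciousness
                if tag == "does-not-know"]
        if gaps:
            return ("investigate", gaps[0])
        return ("proceed",)

agent = IntrospectiveAgent()
agent.observe({"door-open"})     # subconscious: sentence enters consciousness
agent.introspect("light-on")     # deliberate: notices a gap in its knowledge
print(agent.act())               # -> ('investigate', 'light-on')
```

The point of the sketch is only that `act` reads `consciousness` rather than the world directly, so the same mechanism that records observations also lets the agent react to its own ignorance.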
We propose to use contexts as formal objects for robot contexts, whereas context is mainly subconscious in humans. Perhaps robots should also deal with contexts at least partly subconsciously, but I'd bet against that now.
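Contexts as formal objects can be given a toy rendering, loosely after the ist(c, p) notation ("p is true in context c"). The representation below--contexts as named objects with nesting and inherited sentences--is an illustrative assumption of mine, not a worked-out logic of context.

```python
class Context:
    """A context as a first-class formal object."""

    def __init__(self, name, outer=None):
        self.name = name
        self.outer = outer        # contexts may be nested in a more general one
        self.sentences = set()    # sentences asserted in this context

    def assert_in(self, p):
        self.sentences.add(p)

def ist(c, p):
    # p holds in c if asserted there or in an outer (more general) context.
    while c is not None:
        if p in c.sentences:
            return True
        c = c.outer
    return False

common = Context("common-sense")
common.assert_in("objects-fall")
driving = Context("driving", outer=common)
driving.assert_in("keep-right")

print(ist(driving, "keep-right"))    # True: asserted in the driving context
print(ist(driving, "objects-fall"))  # True: inherited from the outer context
print(ist(common, "keep-right"))     # False: specific to driving
```

Making the context an explicit argument is what lets a program reason about its contexts--enter them, leave them, compare them--instead of merely being in one, which is the contrast with the human case drawn above.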
[Much more to come when I get it clear.]
2002 July: It's still not sufficiently clear.