[McCarthy, 1959] proposed programs with common sense that represent what they know about particular situations and the world in general primarily by sentences in some language of mathematical logic. They decide what to do primarily by logical reasoning, i.e. when a logical AI program does an important action, it is usually because it inferred a sentence saying it should. There will usually be other data structures and programs, and they may be very important computationally, but the main decisions about what to do are made by logical reasoning from sentences explicitly present in the robot's memory. Some of the sentences may get into memory by processes that run independently of the robot's decisions, e.g. facts obtained by vision. Developments in logical AI include situation calculus in various forms, logical learning, nonmonotonic reasoning in various forms ([McCarthy, 1980], [McCarthy, 1986], [Brewka, 1991], [Lifschitz, 1994]), theories of concepts as objects [McCarthy, 1979b] and theories of contexts as objects [McCarthy, 1993], [McCarthy and Buvac, 1998]. [McCarthy, 1959] mentioned self-observation but wasn't specific.
There have been many programs that decide what to do by logical reasoning with logical sentences. However, I don't know of any that are conscious of their own ongoing mental processes, i.e. that bring sentences about the sentences generated by these processes into memory along with them. We hope to establish in this article that some consciousness of their own mental processes will be required for robots to reach the level of intelligence needed to do many of the tasks humans will want to give them. In our view, consciousness of self, i.e. introspection, is essential for human-level intelligence and not a mere epiphenomenon. However, we need to distinguish which aspects of human consciousness need to be modelled, which need not be, and where AI systems can go beyond human consciousness.
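The distinction drawn above can be made concrete with a small sketch. This is not McCarthy's formalism but an illustrative toy: a logical agent holds sentences in memory and infers a sentence saying what it should do; an introspective variant additionally records a sentence *about* its own inference, i.e. a sentence about a sentence. All names here (`Agent`, `infer_action`, the string encoding of rules) are assumptions made for the example.

```python
# Toy illustration of a logical agent vs. an introspective one.
# Sentences are plain strings; rules take the form "if P then should(A)".
# This is a sketch of the idea in the text, not a real inference engine.

class Agent:
    def __init__(self):
        self.memory = set()  # sentences the agent believes

    def tell(self, sentence):
        self.memory.add(sentence)

    def infer_action(self):
        # Toy inference: a rule "if P then should(A)" plus the fact "P"
        # yields the sentence "should(A)", which licenses doing A.
        for s in list(self.memory):
            if s.startswith("if "):
                cond, _, concl = s[3:].partition(" then ")
                if cond in self.memory and concl.startswith("should("):
                    self.memory.add(concl)
                    return concl[len("should("):-1]
        return None


class IntrospectiveAgent(Agent):
    def infer_action(self):
        action = super().infer_action()
        if action is not None:
            # A sentence about the agent's own mental process: it records
            # that it *inferred* the action, not merely that it should act.
            self.memory.add(f"I inferred should({action})")
        return action
```

For example, telling an `IntrospectiveAgent` the fact `"low_battery"` and the rule `"if low_battery then should(recharge)"` makes `infer_action()` return `"recharge"`, and its memory afterwards also contains the meta-sentence `"I inferred should(recharge)"`, which further reasoning could then use.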