Penrose discusses awareness and understanding briefly and concludes (with no references to the AI literature) that AI researchers have no idea of how to make computer programs with these qualities.
I substantially agree with his characterizations of awareness and understanding and agree that definitions are not appropriate at the present level of understanding of these phenomena. We disagree about whether computers can have awareness and understanding.
Here's how it can be done within the framework of pure logical AI.
Pure logical AI represents all the program's knowledge and belief by sentences in a language of mathematical logic. Purity is inefficient but makes the discussion brief. [McCarthy, 1989] is a general discussion of logical AI and has additional references.
We distinguish a part of the robot's memory, which we will call its consciousness. Sentences have to come into consciousness before they are used in reasoning.
Reasoning involves logical deduction and also some nonmonotonic reasoning processes. The results of the reasoning re-enter consciousness. Some old sentences in consciousness get crowded out into the main memory.
Deliberate action in a pure logical robot is a consequence of the robot inferring that it should do the action. The actions include external motor and sensory actions (observations) but also mental actions such as retrieval of sentences from the general memory into consciousness.
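The memory architecture just described can be sketched in code. This is an illustrative toy, not anything from the paper: the class and method names are my own, the fixed capacity standing in for "crowding out" is an assumption, and sentences are represented as plain strings rather than logical formulas.

```python
from collections import deque

class LogicalRobot:
    """Toy sketch (illustrative, not from the paper) of the two-part
    memory: a main memory of sentences and a bounded consciousness."""

    def __init__(self, capacity=7):
        self.memory = set()  # main memory: all sentences ever held
        # Bounded working area; deque's maxlen crowds out the oldest
        # conscious sentence when a new one enters.
        self.consciousness = deque(maxlen=capacity)

    def attend(self, sentence):
        """Bring a sentence into consciousness. A crowded-out sentence
        remains available in main memory."""
        self.memory.add(sentence)
        self.consciousness.append(sentence)

    def retrieve(self, predicate):
        """Mental action: retrieve matching sentences from main memory
        into consciousness."""
        for s in list(self.memory):
            if predicate(s):
                self.attend(s)

    def reason(self, rule):
        """Apply an inference rule to the conscious sentences; the
        conclusions re-enter consciousness."""
        for conclusion in rule(list(self.consciousness)):
            self.attend(conclusion)
```

Only sentences currently in consciousness are visible to `reason`, matching the requirement that sentences come into consciousness before they are used in reasoning.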
Awareness of the program's environment is accomplished by the automatic appearance of a certain class of sentences about the environment in the program's consciousness. These sentences often appear through deliberate actions of observation, but some should result from built-in, automatic observations, e.g. noticing who comes into the room.
Besides awareness of the environment, there is also self-awareness. Self-awareness is caused by events and actions of self-observation including observations of consciousness and of the memory as a whole. The sentences expressing self-awareness also go into consciousness.
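The distinction between environmental awareness and self-awareness can also be sketched in code. Again the names and representation are my own illustrative assumptions: observation sentences appear in consciousness automatically, and self-observation produces sentences about the contents of consciousness itself.

```python
class AwareRobot:
    """Toy sketch (illustrative names): awareness as the automatic
    appearance of observation sentences in consciousness."""

    def __init__(self):
        self.consciousness = []

    def notice(self, fact):
        """Built-in observation: a sentence about the environment
        appears in consciousness without a deliberate action."""
        self.consciousness.append(f'observed({fact})')

    def observe_consciousness(self):
        """Self-observation: sentences *about* the current conscious
        sentences are themselves added to consciousness."""
        report = [f'aware-of({s!r})' for s in self.consciousness]
        self.consciousness.extend(report)
```

Note that the sentences produced by `observe_consciousness` mention other conscious sentences, so the robot can subsequently reason about what it was aware of.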
The key question about awareness in the design of logical robots concerns what kinds of sentences can and should appear in consciousness--either automatically or as the result of mental actions. Here are some examples of required mental actions.
Our approach uses Gödel's [Gödel, 1940] notion of relative consistency, which allows inferring that if the theory is consistent, then a certain sentence does not follow from it. In the cases of main AI interest, this can be done without the complications that Gödel had to introduce in order to prove the consistency of the continuum hypothesis. See [McCarthy, 1995] for a start on the details.
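Schematically (my formulation, not a quotation from the paper), the inference pattern is: from a relative consistency result one concludes non-provability, since for first-order theories the consistency of $T \cup \{\neg\varphi\}$ is equivalent to $\varphi$ not being a theorem of $T$.

```latex
% Relative consistency:
\mathrm{Con}(T) \;\rightarrow\; \mathrm{Con}\bigl(T \cup \{\neg\varphi\}\bigr)
% hence, since Con(T + ~phi) holds iff T does not prove phi:
\mathrm{Con}(T) \;\rightarrow\; T \nvdash \varphi
```

A robot that can carry out such reasoning about its own theory can thus justify conclusions of the form "I do not know $\varphi$."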
It seems to me that the notions of awareness and understanding outlined above agree with Penrose's characterizations on p. 37. However, his ideas about free will strike me as quite confused and not repairable. [McCarthy and Hayes, 1969] discusses free will in deterministic systems, e.g. interacting finite automata.