Of course, logical AI involves using actual sentences in the memory of the machine.
To use hopes in this way requires self-observation: the robot must remember what it hoped for.
Sometimes a robot must also infer that other robots or people hope or did hope for certain things.
If $p$ is a proposition, then $Hope(p)$ is the proposition that the robot hopes for $p$ to become true. In mental situation calculus we would write $Holds(Hope(p),s)$ to assert that in mental situation $s$, the robot hopes for $p$.
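To make the notation concrete, here is a minimal sketch of one way a robot's memory might record and query such sentences. The names Prop, Hope, and MentalSituation, and the methods on them, are illustrative assumptions for this sketch, not part of the formalism above.

```python
# Illustrative sketch only: propositions and hopes as terms, a mental
# situation as the set of sentences currently held in the robot's memory.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Prop:
    """An atomic proposition, e.g. 'door_open'."""
    name: str


@dataclass(frozen=True)
class Hope:
    """Hope(p): the proposition that the robot hopes for p to become true."""
    p: Prop


@dataclass
class MentalSituation:
    """A mental situation: the sentences the robot currently holds."""
    sentences: set = field(default_factory=set)

    def assert_(self, sentence):
        """Record a sentence, e.g. the robot observing that it hopes for p."""
        self.sentences.add(sentence)

    def holds(self, sentence) -> bool:
        """Holds(sentence, s): does the sentence hold in this mental situation?"""
        return sentence in self.sentences


# Usage: the robot hopes the door will be open and can later recall that hope.
s = MentalSituation()
p = Prop("door_open")
s.assert_(Hope(p))                   # now Holds(Hope(p), s) is true
print(s.holds(Hope(p)))              # True: the robot remembers what it hoped for
print(s.holds(Hope(Prop("rain"))))   # False
```

In this sketch the hope is stored as an actual sentence in memory, so the robot can observe its own hopes and, with a suitable extension, attribute hopes to other robots or people.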
Human hopes have certain qualities that I can't decide whether we will want. Hope automatically brings into consciousness thoughts related to what a situation realizing the hope would be like. We could design our programs to do the same, but this is more automatic in the human case than might be optimal. Wishful thinking is a well-known human malfunction.