A system, e.g. a robot, can be ascribed beliefs provided sentences
expressing these beliefs have the right relation to the system's
internal states, inputs and outputs and the goals we ascribe to it.
[Dennett, 1971] and [Dennett, 1978] call such ascriptions the
intentional stance. The beliefs need not be explicitly
represented in the memory of the system. Allen Newell
[Newell, 1980] likewise regarded some information not represented by
sentences explicitly present in memory as nevertheless representing
sentences or propositions believed by the system. Newell called this the
logic level. I believe he did not advocate general purpose
programs that represent information primarily by
sentences.
I do.
[McCarthy, 1979a] goes into detail about conditions for ascribing belief and other mental qualities.
To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of a machine in a particular situation may require ascribing mental qualities or qualities isomorphic to them.
[McCarthy, 1979a] considers systems with very limited beliefs. For example, a thermostat may usefully be ascribed one of exactly three beliefs--that the room is too cold, that it is too warm, or that its temperature is ok. This is sometimes worth doing even though the thermostat may be completely understood as a physical system.
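The thermostat's three ascribable beliefs can be made concrete in a minimal sketch. The set point and margin here are illustrative assumptions, not part of the original example:

```python
# A thermostat modeled as holding exactly one of three beliefs about the
# room, following the example attributed to [McCarthy, 1979a]. The numeric
# set point and margin are hypothetical, chosen only for illustration.

def thermostat_belief(temperature, set_point=20.0, margin=1.0):
    """Return the single belief we ascribe to the thermostat."""
    if temperature < set_point - margin:
        return "too cold"   # belief: the room is too cold
    if temperature > set_point + margin:
        return "too warm"   # belief: the room is too warm
    return "ok"             # belief: the temperature is ok

print(thermostat_belief(17.0))  # → too cold
print(thermostat_belief(20.5))  # → ok
```

The point of the ascription is not that the device computes with sentences; the same three-way physical state can be described either mechanically or, more briefly, as a belief.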
Tom Costello pointed out to me that a simple system that doesn't use sentences can sometimes be ascribed some introspective knowledge. Namely, an electronic alarm clock getting power after being without power can be said to know that it doesn't know the time. It asks to be reset by blinking its display. The usual alarm clock can be understood just as well by the design stance as by the intentional stance. However, we can imagine an alarm clock that had an interesting strategy for getting the time after the end of a power failure. In that case, the ascription of knowledge of non-knowledge might be the best way of understanding that part of the state.
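The alarm clock's knowledge of its own non-knowledge can be sketched in the same spirit. The class and method names below are illustrative assumptions; the blinking display stands in for the introspective belief "I do not know the time":

```python
# An alarm clock that, after a power failure, can be ascribed the
# introspective belief that it does not know the time, which it expresses
# by blinking its display until reset. All identifiers are hypothetical.

class AlarmClock:
    def __init__(self):
        self.time = None  # None models the lost time after a power outage

    def knows_time(self):
        return self.time is not None

    def display(self):
        # Blinking expresses knowledge of non-knowledge: the clock
        # "knows that it doesn't know the time" and asks to be reset.
        if not self.knows_time():
            return "12:00 (blinking)"
        return self.time

    def reset(self, time):
        self.time = time

clock = AlarmClock()        # power restored after a failure
print(clock.display())      # → 12:00 (blinking)
clock.reset("07:45")
print(clock.display())      # → 07:45
```

As the text notes, the ordinary clock is just as well understood by the design stance; the intentional description earns its keep only when the device's strategy for recovering the time is interesting enough that "it knows it doesn't know" is the briefest accurate summary of its state.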