From the AI point of view, consciousness must be regarded as a collection of interacting processes rather than the unitary object of much philosophical speculation. We ask what kinds of propositions and other entities need to be designed for consciousness to be useful to an animal or a machine. We thereby assert that human consciousness is useful to human functioning and not just an epiphenomenon. Zombies in the sense of Todd Moody's article are merely the victims of Moody's prejudices. To behave like humans, zombies will need what Moody might call pseudo-consciousness, but useful pseudo-consciousness will share all the observable qualities of human consciousness, including what the zombie will be able to report. Robots will require a pseudo-consciousness with many of the intellectual qualities of human consciousness but will function successfully with few if any of its emotional qualities, if that is how we choose to build them.
Such is an AI doctrine on the subject. We must now ask what specific processes make up the consciousness necessary for successful robots, and what additional processes are required should we want them to imitate humans. Many aspects of intelligent behavior do not require anything like a human level of consciousness, and hardly any AI systems built so far have any. For this reason the following remarks are somewhat speculative and are stimulated more by people like the Dreyfuses and Penrose, who deny the possibility of robot consciousness, than by any features of existing programs.
We regard consciousness as a subset of the memory of an animal or machine, distinguished by the fact that many processes involve only those elements of memory that are in consciousness. The elements of memory include propositions (like sentences) and other entities. We may divide our consideration into basic consciousness and consciousness of self.
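The structural claim above, that consciousness is a distinguished subset of memory on which certain processes exclusively operate, can be illustrated with a minimal sketch. The class and method names here (Memory, attend, report) are hypothetical, invented for illustration only, and processes such as verbal report are reduced to a toy operation over the conscious subset.

```python
# Hypothetical sketch: consciousness modeled as a distinguished subset
# of memory. Only some elements are "in consciousness", and certain
# processes (here, report) operate solely on that subset.

class Memory:
    """All stored elements, with a subset currently in consciousness."""

    def __init__(self):
        self.elements = set()   # propositions (like sentences) and other entities
        self.conscious = set()  # the distinguished subset

    def store(self, element):
        """Add an element to memory without making it conscious."""
        self.elements.add(element)

    def attend(self, element):
        """Bring a stored element into consciousness."""
        if element in self.elements:
            self.conscious.add(element)

    def report(self):
        """A process restricted to conscious elements, e.g. verbal report."""
        return sorted(self.conscious)

m = Memory()
m.store("the gripper is open")
m.store("battery at 40 percent")
m.attend("the gripper is open")
print(m.report())  # only the attended proposition is reportable
```

The point of the sketch is only the asymmetry: both propositions are in memory, but the report process sees just the one that was brought into consciousness.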