
Introduction

 

Artificial intelligence and philosophy have more in common than a science usually has with the philosophy of that science. This is because human-level artificial intelligence requires equipping a computer program with some philosophical attitudes, especially epistemological ones.

The program must have built into it a concept of what knowledge is and how it is obtained.

If the program is to reason about what it can and cannot do, its designers will need an attitude toward free will. If it is to do meta-level reasoning about what it can do, it needs an attitude of its own toward free will.

If the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that.

Unfortunately, in none of these areas is there any philosophical attitude or system sufficiently well defined to provide the basis of a usable computer program.

Most AI work today does not require any philosophy, because the system being developed doesn't have to operate independently in the world and have a view of the world. The designer of the program does the philosophy in advance and builds a restricted representation into the program.

Building a chess program requires no philosophy, and Mycin recommended treatments for bacterial infections without even having a notion of processes taking place in time. However, the performance of Mycin-like programs and chess programs is limited by their lack of common sense and philosophy, and many applications will require a great deal of both. For example, robots that do what they think their owners want will have to reason about wants.

Not all philosophical positions are compatible with what has to be built into intelligent programs. Here are some of the philosophical attitudes that seem to me to be required.

  1. Science and common-sense knowledge of the world must both be accepted. There are atoms, and there are chairs. We can learn features of the world at the intermediate size level on which humans operate without having to understand fundamental physics. Causal relations must also be used for a robot to reason about the consequences of its possible actions.
  2. Mind has to be understood a feature at a time. There are systems with only a few beliefs and no belief that they have beliefs. Other systems will do extensive introspection. Contrast this with the attitude that unless a system has a whole raft of features, it isn't a mind and therefore can't have beliefs.
  3. Beliefs and intentions are objects that can be formally described, as illustrated in the sketches following this list.
  4. A sufficient reason to ascribe a mental quality is that it accounts for behavior to a sufficient degree.
  5. It is legitimate to use approximate concepts not capable of iff (if-and-only-if) definition. For this it is necessary to relax some of the criteria for a concept to be meaningful. It is still possible to use mathematical logic to express approximate concepts; a sketch follows this list.
  6. Because a theory of approximate concepts and approximate theories is not available, philosophical attempts to be precise have often led to useless hair-splitting.
  7. Free will and determinism are compatible. The deterministic process that determines what an agent will do involves its evaluation of the consequences of the available choices. These choices are present in its consciousness and can give rise to sentences about them as they are observed. A sketch of this view also follows the list.
  8. Self-consciousness consists in putting sentences about consciousness in memory.
  9. Twentieth-century philosophers became too critical of reification. Many of the criticisms don't apply when the entities reified are treated as approximate concepts.
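
As a minimal sketch of item 3 (my illustration, not the author's; the symbols Believes, Implies, Pat, and Tweety are hypothetical), beliefs can be treated as formal objects by reifying propositions as first-order terms, so that a belief ascription is an ordinary atomic formula and beliefs can be quantified over:

  % Illustrative only: Flies(Tweety) here denotes a proposition
  % (a term), not a formula, so it can serve as an argument.
  \[ Believes(Pat,\ Flies(Tweety)) \]
  % An idealized closure axiom a designer might adopt or weaken:
  \[ \forall a\,\forall p\,\forall q.\ Believes(a,p) \land
       Believes(a,\ Implies(p,q)) \supset Believes(a,q) \]

Real agents don't believe all consequences of their beliefs, so the closure axiom would be weakened in any serious formalization; the point is only that once beliefs are terms, such trade-offs can be stated and reasoned about formally.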
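
Similarly, for item 5 (again my illustration, with hypothetical predicate names), an approximate concept can be expressed in ordinary logic by asserting separate sufficient and necessary conditions while deliberately declining to bridge them with an iff definition:

  % Sufficient condition: clear cases are chairs.
  \[ \forall x.\ ClearCaseOfChair(x) \supset Chair(x) \]
  % Necessary condition: every chair can be sat on.
  \[ \forall x.\ Chair(x) \supset CanBeSatOn(x) \]
  % Borderline objects satisfy neither antecedent; the theory
  % deliberately remains silent about them.

Nothing in first-order logic forces a concept to have sharp boundaries; the axioms pin down the clear cases and leave the rest open.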
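
Finally, a sketch of the compatibilist picture in item 7 (my illustration; Does, Choices, Value, and Outcome are hypothetical symbols): the agent's action is deterministic, but the determining process runs through the agent's own evaluation of its choices:

  % The action performed is the choice the agent itself values
  % most highly, evaluated over the outcomes it predicts.
  \[ Does(a,\ \mathop{\mathrm{argmax}}_{c \in Choices(a)}
       Value(a,\ Outcome(c))) \]

On this reading the agent's deliberation is a genuine link in the causal chain, which is what makes the choices available to it even though the whole process is deterministic.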



John McCarthy