
Ability, Practical Reason and Free Will

 

An AI system capable of achieving goals in the common-sense world will have to reason about what it and other actors can and cannot do. For concreteness, consider a robot that must act in the same world as people and perform tasks that people give it. Its need to reason about its abilities puts the traditional philosophical problem of free will in the following form. What view shall we build into the robot about its own abilities, i.e. how shall we make it reason about what it can and cannot do? (Wishing to avoid begging any questions, by ``reason'' we mean compute using axioms, observation sentences, rules of inference and nonmonotonic rules of conjecture.)

Let A be a task we want the robot to perform, and let B and C be alternative intermediate goals, either of which would allow the accomplishment of A. We want the robot to be able to choose between attempting B and attempting C. It would be silly to program it to reason: ``I'm a robot and a deterministic device. Therefore, I have no choice between B and C. What I will do is determined by my construction.'' Instead it must decide in some way which of B and C it can accomplish. It should be able to conclude in some cases that it can accomplish B and not C, and therefore it should take B as a subgoal on the way to achieving A. In other cases it should conclude that it can accomplish either B or C and should choose whichever is evaluated as better according to the criteria we provide it.
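The choice just described can be rendered as a small decision procedure. The following Python sketch is purely illustrative and is not part of any formalism proposed here; the predicates can_achieve and value are hypothetical placeholders for whatever achievability reasoning and evaluation criteria the robot actually uses.

    def choose_subgoal(robot, candidates, can_achieve, value):
        """Pick an intermediate goal (e.g. B or C) on the way to A.

        can_achieve(robot, goal) -> bool : the robot's conclusion, however
            reached, about whether it can bring the goal about.
        value(goal) -> number : the evaluation criteria supplied by the designers.
        """
        achievable = [g for g in candidates if can_achieve(robot, g)]
        if not achievable:
            return None                    # neither subgoal is believed achievable
        return max(achievable, key=value)  # if both are achievable, take the better one

    # Illustrative use with stub predicates: suppose only B is judged achievable.
    if __name__ == "__main__":
        can = lambda robot, g: g == "B"
        worth = {"B": 1.0, "C": 2.0}.get
        print(choose_subgoal("robot", ["B", "C"], can, worth))  # prints B

The point of the sketch is only that the robot reasons about which subgoals it can accomplish and then chooses among them; nowhere does it reason from its own determinism to the conclusion that it has no choice.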

(McCarthy and Hayes 1969) proposes conditions on the semantics of any formalism within which the robot should reason. The essential idea is that what the robot can do is determined by the place the robot occupies in the world, not by its internal structure. For example, if a certain sequence of outputs from the robot will achieve B, then we conclude (or it concludes) that the robot can achieve B without reasoning about whether the robot will actually produce that sequence of outputs.
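One rough way to render that condition, as a sketch only and not the exact formalism of (McCarthy and Hayes 1969), is the following, where result stands for the situation produced when the robot emits the outputs o_1, ..., o_n in the initial situation s_0 and the rest of the world behaves as it does:

\[
(\exists o_1 \ldots o_n)\; B(\mathrm{result}(\langle o_1,\ldots,o_n \rangle, s_0)) \supset \mathrm{can}(\mathrm{Robot}, B, s_0).
\]

The antecedent mentions only the robot's possible outputs and the world's response to them; nothing about the robot's internal structure, and in particular nothing about whether it will in fact produce those outputs, appears in it.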

Our contention is that this is approximately how any system, whether human or robot, must reason about its ability to achieve goals. The basic formalism will be the same, regardless of whether the system is reasoning about its own abilities or about those of other systems, including people.

The above-mentioned paper also discusses the complexities that come up when a strategy is required to achieve the goal and when internal inhibitions or lack of knowledge have to be taken into account.

