Computer Science Department
Stanford, CA 94305
I can, but I won't.
Human free will is a product of evolution and contributes to the success of the human animal. Useful robots will also require free will of a similar kind, and we will have to design it into them.
Free will is not an all-or-nothing thing. Some agents have more free will, or free will of different kinds, than others, and we will try to analyze this phenomenon. Our objectives are primarily technological, i.e., to study what aspects of free will can make robots more useful, and we will not try to console those who find determinism distressing. We distinguish between having choices and being conscious of these choices; both are important, even for robots. Consciousness of choices requires more structure in the agent than merely having them. Consciousness of free will is therefore not just an epiphenomenon of structure serving other purposes.
Free will does not require a very complex system. Young children and rather simple computer systems can represent internally ``I can, but I won't'' and behave accordingly.
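To make this concrete, here is a minimal sketch, entirely my own illustration rather than anything from the paper, of a simple agent that represents ``I can, but I won't'': it records that an action is possible and nevertheless selects a different one by preference. The state, actions, and preference values are all invented for the example.

```python
def possible_actions(state):
    """Actions the agent can perform in the given state (hypothetical)."""
    return ["greet", "ignore"]

def choose(state, preferences):
    """Pick the most preferred action among those possible."""
    options = possible_actions(state)
    return max(options, key=lambda a: preferences.get(a, 0))

state = "stranger_approaches"
prefs = {"greet": 0, "ignore": 1}

can_greet = "greet" in possible_actions(state)  # the agent represents "I can"
will_do = choose(state, prefs)                  # ... "but I won't": it prefers otherwise
```

The point of the sketch is only that representing an unchosen alternative takes very little machinery: a set of possible actions and a selection among them.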
Naturally I hope this detailed design stance [Dennett 1978] will help in understanding human free will. It takes the compatibilist philosophical position.
Some readers may be interested in what the paper says about human free will but put off by logical formulas. The formulas are not essential to the arguments about human free will; they are present for readers contemplating AI systems that use mathematical logic. Such readers can skip the formulas, though the coherence of what remains is not absolutely guaranteed.