There are different kinds and levels of free will. An automobile has none, a chess program has a minimal kind of free will, and a human has a lot. Human-level AI systems, i.e. those that match or exceed human intelligence, will need much more free will than present chess programs have, and most likely will need almost as much as a human possesses, even to be useful servants.
Consider chess programs. What kinds of free will do they have, and what kinds can they have? A typical chess program, given a position, generates the list of moves available in that position. It then goes down the list and tries the moves successively, getting a score for each move. It chooses the move with the highest score (or perhaps the first move considered good enough to achieve a certain objective).
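The selection scheme just described can be sketched as follows. This is a minimal illustration, not a real chess engine; the `legal_moves` and `score` functions are hypothetical stand-ins supplied by the caller.

```python
def choose_move(position, legal_moves, score, good_enough=None):
    """Pick a move by the scheme described above: generate the legal
    moves, score each in turn, and return the highest-scoring one --
    or the first move whose score reaches `good_enough`, if that
    threshold is given."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves(position):
        s = score(position, move)
        if good_enough is not None and s >= good_enough:
            return move  # first move judged good enough
        if s > best_score:
            best_move, best_score = move, s
    return best_move
```

Note that the program's "free will" here consists only in generating and comparing alternatives; each move is reduced to a single number before any comparison takes place.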
That the program considers alternatives is our reason for ascribing to it a little free will, whereas we ascribe none to the automobile. How is the chess program's free will limited, and what more could we ask? Could further free will help make it a more effective program?
A human doesn't usually consider his choices sequentially, scoring each and comparing only the scores. The human compares the consequences of the different choices in detail. Would it help a chess program to do that? Human chess players do it.
Beyond that is considering the set Legals(p) of legal moves in position p as an object in itself. A human considers his set of choices as a whole and doesn't just consider each choice individually. A chess position is called ``cramped'' if there are few non-disastrous moves, and it is considered useful to cramp the opponent's position even if one has no other reason for considering the position bad for the opponent. Very likely, a program that could play as well as Deep Blue while doing only as much computation as a human does would need a more elaborate choice structure, i.e. more free will. For example, one fluent of chess positions, e.g. having an open file for a rook, can be regarded as making one position better than another without assigning numerical values to positions.
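The two ideas above can be sketched concretely: treating the set of legal moves as an object (to judge a position ``cramped''), and comparing positions by a single fluent rather than by numerical scores. The predicates `legal_moves`, `disastrous`, and the `fluent` argument are hypothetical stand-ins for real chess knowledge.

```python
def cramped(position, legal_moves, disastrous, threshold=3):
    """Judge a position 'cramped' when its set of legal moves contains
    few non-disastrous members -- a property of the move set as a
    whole, not of any single move."""
    playable = [m for m in legal_moves(position)
                if not disastrous(position, m)]
    return len(playable) <= threshold

def better_by_fluent(p1, p2, fluent):
    """Compare two positions by one boolean fluent (e.g. having an
    open file for a rook) without assigning numbers: True means p1 is
    judged better, False means p2 is, None means this fluent does not
    decide between them."""
    f1, f2 = fluent(p1), fluent(p2)
    if f1 == f2:
        return None
    return f1
```

The point of `better_by_fluent` is that the comparison is qualitative: the program can prefer one position to another on the basis of a single feature, with no commitment to a total numerical ordering of all positions.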