In the first edition of Dreyfus's book there were some challenges to AI. Dreyfus said computers couldn't exhibit ``ambiguity tolerance'', ``fringe consciousness'' and ``zeroing in''. These were left so imprecise that most readers couldn't see any definite problem at all. In the succeeding 30 years Dreyfus has neither made these challenges more precise nor proposed any new challenges, however imprecise. It's a pity, because AI could use a critic saying, ``Here's the easiest thing I don't see how you can do''. That part of Dreyfus's research program has certainly degenerated.
However, I can give a definite meaning to the phrase ``ambiguity tolerance'' that may not be too far from Dreyfus's vague idea, and with which formalized nonmonotonic reasoning can deal. The idea is that a concept that may be ambiguous in general is to be taken by default as unambiguous in a particular case unless there is reason to do otherwise.
Here's an example.
Suppose that some knowledge engineer has the job of making an adviser for assistant district attorneys. The prosecutor answers some questions about the facts of the case, and the program suggests asking for indictments for certain crimes. We suppose that attempting to bribe a public official is one of these crimes.
We ask whether the knowledge engineer must have anticipated the following three possible defenses against the charge, i.e. have decided whether the following circumstances still justify an indictment.
There may be further potential ambiguities in the statute. If we demand that the knowledge engineer have resolved all of them before he can write his expert system, we are asking for the impossible. Legislators, lawyers and judges don't see all the ambiguities in advance.
Notice that in most cases of bribing a public official, there was a specific individual, he really was a public official, and this was really known to the defendant. Very likely, the legislators had not thought of any other possibilities. The nonmonotonic reasoning approach to ambiguity tolerance says that by default the statute is unambiguous in a particular case. Indeed this is how the law works. The courts will not invalidate a law because of a general ambiguity; it has to be ambiguous in a significant way in the particular case.
Since the expert system writer cannot anticipate all the possible ambiguities, he must make his system ambiguity tolerant.
When an ambiguity is actually pointed out to the expert system, it would be best if it advised looking at cases to see which way the statute had been interpreted by judges. I don't know whether, to be a useful adviser in statutory criminal law, the expert system would have to have a library of cases and the ability to reason from them.
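The default behavior described above can be sketched in ordinary code. This is only my illustration, not a formalization; the names (Case, advise, raised_ambiguities) are hypothetical and stand in for whatever representation a real expert-system shell would use. The point is the nonmonotonic shape of the rule: absent any recorded ambiguity for the case at hand, the statute is applied as if unambiguous, and adding information can retract the conclusion.

```python
# A minimal sketch of ambiguity tolerance as a default rule.
# All names here (Case, advise, raised_ambiguities) are hypothetical
# illustrations, not part of any real expert-system shell.

from dataclasses import dataclass, field

@dataclass
class Case:
    facts: dict
    raised_ambiguities: list = field(default_factory=list)

def advise(case):
    """Treat the statute as unambiguous by default; defeat the
    default only when a specific ambiguity has been raised for
    this particular case."""
    if case.raised_ambiguities:
        # An ambiguity was pointed out: advise consulting precedent
        # rather than applying the statute mechanically.
        return ("consult precedent", list(case.raised_ambiguities))
    # Default: the statute is unambiguous in this case.
    return ("seek indictment", [])

# A routine case: no ambiguity raised, so the default applies.
routine = Case(facts={"offered_money": True, "to_official": True})
print(advise(routine))   # ('seek indictment', [])

# Adding information defeats the default nonmonotonically:
# the same facts plus a raised ambiguity change the conclusion.
hard = Case(facts={"offered_money": True, "to_official": True},
            raised_ambiguities=["official status uncertain"])
print(advise(hard))      # ('consult precedent', ['official status uncertain'])
```

The design choice worth noting is that the knowledge engineer never enumerates the ambiguities in advance; the default covers every unanticipated case, and each ambiguity is handled only when someone actually raises it.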
I have not written logical formulas for ambiguity tolerance, i.e. expressing the default that a concept, possibly ambiguous in general, is to be considered unambiguous in particular cases unless there is evidence to the contrary. However, I would be strongly motivated to give it high priority if Dreyfus were to offer to bet money that I can't.
To conclude: Dreyfus has posed various challenges to AI from time to time, but he doesn't seem to make any of them precise. Here is my challenge to Dreyfus, whereby he might rescue his research program from degeneration.
What is the least complex intellectual behavior that you think humans can do and computers can't? It would be nice to have more details than were given in connection with ``ambiguity tolerance'' and ``zeroing in''.