Computer Science Department
Stanford, CA 94305
This note presents an example (roofs-and-boxes) to refute the idea that sequence extrapolation is a paradigmatic problem for AI. The plausible idea was that intelligence consists in predicting the future sequence of sensations from the past sequence of sensations. The roofs-and-boxes example illustrates that intelligence requires knowing about objects in the world, and not just about one's history of sensations--even if one's goal is to predict future sensations.
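To make concrete what sequence extrapolation means in the narrow sense being refuted, here is a minimal sketch (my illustration, not part of the original note): a program that predicts the next element of a numerical sequence purely from the sequence itself, by repeated finite differences. It is exact for polynomial sequences and knows nothing about any world behind the numbers.

```python
def next_term(seq):
    """Predict the next term of a sequence by repeated finite
    differences; exact when the sequence is polynomial."""
    if all(x == seq[0] for x in seq):
        # constant sequence: the next term is the same value
        return seq[0]
    # recurse on the sequence of successive differences
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return seq[-1] + next_term(diffs)

print(next_term([1, 4, 9, 16]))  # squares: predicts 25
```

Such a predictor operates only on the history of "sensations" (the numbers seen so far), which is exactly the stance the roofs-and-boxes example is meant to show is insufficient.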
The justification for writing this up many years after I discussed it in lectures is that almost all machine learning research still does not undertake to infer structures in the world; it only classifies the data. I'll explain this point after giving the example.