
Introduction

 

This note presents an example, roofs-and-boxes, to refute the idea that sequence extrapolation is the paradigmatic problem for AI. The plausible idea was that intelligence consists in predicting the sequence of future sensations from the past sequence of sensations, and it led to programs for sequence extrapolation. The first such programs predicted sequences of integers generated by polynomials; later programs handled sequences generated by programs that included conditional expressions.
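The historical programs mentioned above are not reproduced here. As a minimal sketch of the kind of extrapolation involved, the following Python function (hypothetical, not any of the programs referred to) extends an integer sequence generated by a polynomial by building a table of finite differences.

def extrapolate_polynomial(seq, n_more=1):
    """Predict further terms of an integer sequence generated by a
    polynomial.  A degree-d polynomial has constant d-th differences,
    so we difference until the row is constant, then sum back up."""
    rows = [list(seq)]
    # Build the difference table until a constant row is reached.
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    out = list(seq)
    for _ in range(n_more):
        # Extend each row bottom-up: the constant row repeats its value,
        # and each row above grows by the new value of the row below it.
        rows[-1].append(rows[-1][-1])
        for i in range(len(rows) - 2, -1, -1):
            rows[i].append(rows[i][-1] + rows[i + 1][-1])
        out.append(rows[0][-1])
    return out

# Example: the squares 1, 4, 9, 16 are generated by n**2, so the method
# predicts 25 and 36 as the next two terms.
print(extrapolate_polynomial([1, 4, 9, 16], n_more=2))  # [1, 4, 9, 16, 25, 36]

Sequences generated by programs with conditional expressions require a richer hypothesis space than this difference-table method provides.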

Programs for sequence extrapolation were written by Edward Fredkin, Donald Michie, Jan Mycielski and others. I don't have the references yet.

My objection to taking this as a paradigm is that predicting the future in real life involves many kinds of learning beyond direct sequence extrapolation. In particular, human learning often involves discovering objects in the environment and their effects on experience.

The roofs-and-boxes example illustrates that intelligence requires knowing about objects in the world and not just about one's history of sensations--even if one's goal is to predict future sensations.



John McCarthy
Wed Sep 9 13:00:51 PDT 1998