
What is Human-Level AI?

 

The first scientific discussion of human-level machine intelligence was apparently by Alan Turing in a 1947 lecture [Turing, 1947]. The notion was amplified as a goal in [Turing, 1950], but even the latter paper did not say what would have to be done to achieve it.

Allen Newell and Herbert Simon in 1954 were the first people to make a start on programming computers for general intelligence. They were over-optimistic, because their idea of what had to be done to achieve human-level intelligence was inadequate. The General Problem Solver (GPS) took general problem solving to be the task of transforming one expression into another using an allowed set of transformations.
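To make that notion of problem solving concrete, here is a minimal sketch in Python of search over expression rewrites. It is not Newell and Simon's program; the rewrite rules and expressions are invented for illustration, and plain breadth-first search stands in for GPS's actual control structure.

    from collections import deque

    def solve(start, goal, rules):
        """Return the sequence of expressions taking `start` to `goal`, or None."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            expr, path = frontier.popleft()
            if expr == goal:
                return path
            for lhs, rhs in rules:
                i = expr.find(lhs)
                while i != -1:                      # try every occurrence of lhs
                    new = expr[:i] + rhs + expr[i + len(lhs):]
                    if new not in seen:
                        seen.add(new)
                        frontier.append((new, path + [new]))
                    i = expr.find(lhs, i + 1)
        return None

    # Toy symbol-pushing example: two allowed transformations.
    rules = [("AB", "BA"), ("B", "CC")]
    print(solve("AB", "CCA", rules))    # ['BA', 'CCA']

GPS itself did not search blindly; it used means-ends analysis, choosing transformations according to the differences they reduce between the current expression and the goal.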

Many tasks that humans can do, we cannot yet make computers do. There are two approaches to human-level AI, but each presents difficulties. It isn't a question of deciding between them, because each should eventually succeed; it is more a race.

  1. If we understood enough about how the human intellect works, we could simulate it. However, we don't have sufficient ability to observe ourselves or others to understand directly how our intellects work. Understanding the human brain well enough to imitate its function therefore requires theoretical and experimental success in psychology and neurophysiology. See [Newell and Simon, 1972] for the beginning of the information processing approach to psychology.
  2. To the extent that we understand the problems that achieving goals in the world presents to intelligence, we can write intelligent programs. That's what this article is about.

What problems does the world present to intelligence? More narrowly, we consider the problems it would present to a human-scale robot faced with the tasks humans might be inclined to relegate to sufficiently intelligent robots. The physical world of a robot contains middle-sized objects about which its sensory apparatus can obtain only partial information, quite inadequate to fully determine the effects of its future actions. Its mental world includes its interactions with people and also meta-information about the information it has or can obtain.

Our approach is based on what we call the common sense informatic situation. In order to explain the common sense informatic situation, we contrast it with the bounded informatic situation that characterizes both formal scientific theories and almost all (maybe all) experimental work in AI done so far.

A formal theory in the physical sciences deals with a bounded informatic situation. Scientists decide informally in advance what phenomena to take into account. For example, much celestial mechanics is done within the Newtonian gravitational theory and does not take into account possible additional effects such as outgassing from a comet or electromagnetic forces exerted by the solar wind. If more phenomena are to be considered, a person must make a new theory. Probabilistic and fuzzy uncertainties can still fit into a bounded informatic system; it is only necessary that the set of possibilities (sample space) be bounded.

Most AI formalisms also work only in a bounded informatic situation. What phenomena to take into account is decided by a person before the formal theory is constructed. With such restrictions, much of the reasoning can be monotonic, but such systems cannot reach human-level ability. For that, the machine will have to decide for itself what information is relevant. When a bounded informatic situation is appropriate, the system must construct or choose a limited context containing a suitable theory whose predicates and functions connect to the machine's inputs and outputs in an appropriate way. The logical tool for this is nonmonotonic reasoning.
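As an illustration (the formula is a standard textbook example, not taken from this article), a default can be written in McCarthy's circumscription formalism with an abnormality predicate ab:

    \forall x\,[\mathrm{bird}(x) \wedge \neg\,\mathrm{ab}(x) \rightarrow \mathrm{flies}(x)]

Circumscribing ab, that is minimizing its extension, lets the system conclude flies(Tweety) from bird(Tweety) alone; if it later learns that Tweety is a penguin and that penguins are abnormal in this respect, the conclusion is withdrawn. The set of conclusions can shrink as information is added, which is exactly what a monotonic logic cannot do.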


