DARTMOUTH AND BEYOND
John McCarthy, Stanford University

The symbolic role of the Dartmouth Summer Workshop on Artificial Intelligence in establishing AI as a field of research was more important than the specific results obtained at the meeting. I didn't expect this.

The most important results presented at the meeting were those of Newell, Simon, and Shaw. Their work, which had been done previously, included the list processing system IPL and its use to program their Logic Theory Machine, which matched the protocols of human subjects given the task of proving logical theorems as pure symbolic manipulation, i.e. with no explanation of the logic involved.

Alex Bernstein of IBM reported on his design for a chess program that played the full game. I presented my discovery of the alpha-beta heuristic for chess.
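The alpha-beta heuristic can be sketched in a few lines. This is a minimal modern Python formulation of the idea, not McCarthy's 1956 statement of it; the nested-list game tree and the function name are illustrative assumptions.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax value of `node`, skipping branches that cannot change the
    result (the alpha-beta cutoff).  A leaf is a number (its static
    evaluation); an interior node is a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # we would never choose this line
                break
        return value

# The maximizer picks the subtree whose worst case is best:
print(alphabeta([[3, 5], [6, 9], [1, 2]]))   # prints 6
```

The point of the heuristic is the two `break` statements: once a line is known to be worse than an alternative already in hand, the rest of its subtree is never examined.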

The results obtained at the meeting included Minsky's idea for a geometry theorem prover that tried to prove only sentences true in a diagram, and Solomonoff's work on algorithmic complexity. My own work on logical AI only started two years later.

PREHISTORY OF THE DARTMOUTH WORKSHOP

The four organizers of the 1956 Dartmouth Workshop on artificial intelligence were John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

My own interest in AI was triggered by attending the September 1948 Hixon Symposium held at Caltech. At this symposium, the computer and the brain were compared, the comparison being rather theoretical, since there weren't any stored-program computers yet. The idea of intelligent computer programs isn't in the proceedings, though maybe it was discussed. Turing already had the idea in 1947. I developed some ideas about intelligent finite automata but found them unsatisfactory and didn't publish. My Princeton PhD thesis was about differential equations.

Marvin Minsky was independently interested in AI, and in 1950, while a senior at Harvard, built, along with Dean Edmunds, a simple neural net learning machine. At Princeton, Minsky pursued his interest in AI, and his 1954 PhD thesis established the criterion for a neuron to be a universal computing element.

Claude Shannon proposed a chess program in 1950 and built many small relay machines exhibiting some features of intelligence, one being a mouse that searched a maze to find a goal.

In the summer of 1952 Shannon supported Minsky and me at Bell Telephone Laboratories. The result of my efforts was a paper on the inversion of functions defined by Turing machines. I was unsatisfied with this approach to AI also, because it didn't permit the direct expression of facts about the world.

Also in 1952 Shannon and I invited a number of researchers to contribute to a volume entitled Automata Studies that finally came out in 1956.

I came to Dartmouth College in 1954 and was invited by Nathaniel Rochester of IBM to spend the summer of 1955 in his Information Research Department in Poughkeepsie, NY. Rochester had been the designer of the IBM 701 computer. Rochester became interested in AI and his department sponsored important work until IBM had a fit of stupidity in 1959.

That summer Minsky, Rochester, Shannon, and I proposed the Dartmouth workshop. The proposal to the Rockefeller Foundation was written in August 1955, and is the source of the term artificial intelligence. The term was chosen to nail the flag to the mast, because I (at least) was disappointed at how few of the papers in Automata Studies dealt with making machines behave intelligently. We wanted to focus the attention of the participants.

The original idea of the proposal was that the participants would spend two months at Dartmouth working collectively on AI and, we hoped, would make substantial advances.

It didn't work out that way, for three reasons. First, the Rockefeller Foundation gave us only half the money we asked for. Second, the participants all had their own research agendas and weren't much deflected from them; accordingly, they came to Dartmouth at varied times and for varying lengths of time. Third, and most important, AI presents many more difficulties than were apparent in 1956.

WHAT HAPPENED AT DARTMOUTH?

Newell and Simon were the stars--with list processing and the logic theory machine.

Minsky's diagram-based geometry theorem-proving idea. Rochester got Herbert Gelernter to do it, but IBM had a fit of stupidity in 1959 and lost its advantage in AI.

Solomonoff's start on algorithmic complexity.

Alex Bernstein's chess program. My alpha-beta heuristic for chess-like games.

My own ideas on logical AI came two years later.

A SAMPLE OF WHAT AI HAS ACCOMPLISHED?

As in any scientific field, we need to distinguish basic research from applications. At present too large a fraction of the work is going into applications.

Basic AI research accomplishments include: success in chess as a drosophila for AI (and, by way of contrast, non-success in go); formalisms for action, including the situation calculus and the event calculus; non-monotonic programming systems like Microplanner and Prolog; the theory of non-monotonic reasoning, including circumscription and the logic of defaults; the surprising success of propositional satisfiability programs and their widespread applications; and the Causal Calculator.

Applied results include many classification applications, robotics, computer vision, and driving a vehicle.

There are quite a few accomplishments I would include if reminded of them. I almost forgot propositional satisfiability. Apologies to the overlooked.

WHEN WILL WE HAVE HUMAN-LEVEL AI?--Kurzweil's blunder.

This is the wrong question. We can't yet extrapolate from present AI to human-level AI by 2029 or any other fixed date.

The right question. We will reach human-level AI when someone solves some basic problems. Maybe five years--maybe 500 years. The genetic code came 100 years after Mendel.

What problems?

How to do nonmonotonic reasoning in general. Formalizing entities that don't have if-and-only-if definitions. Sufficiently self-aware systems. I know of more, but most likely there are problems no one knows about yet.

Three classical problems of AI are the frame problem of how to avoid specifying what doesn't change when an event occurs, the qualification problem of avoiding specifying every niggling qualification for an action to be successful, and the ramification problem of avoiding specifying all the side effects of an event. All three have been solved in important contexts and for important applications, but I think none has been solved at the human level of intelligence.
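What a solution to the frame problem looks like in a simple context can be illustrated with a toy Python sketch of mine (the fluents and the state representation are hypothetical examples, not a formalism from the talk): an action lists only the fluents it changes, and an inertia rule carries everything else forward, so no explicit frame axioms are needed.

```python
def result(state, effects):
    """Successor state after an action: fluents the action mentions are
    updated; all others persist by inertia (no frame axioms needed)."""
    new_state = dict(state)    # inertia: copy every fluent unchanged
    new_state.update(effects)  # then override just what the action changes
    return new_state

s0 = {"door_open": False, "box_colour": "red", "robot_at": "room1"}
s1 = result(s0, {"box_colour": "blue"})  # paint the box
# door_open and robot_at carry over without being mentioned:
print(s1)  # {'door_open': False, 'box_colour': 'blue', 'robot_at': 'room1'}
```

Without the inertia rule, the description of the paint action would also have to say that painting leaves the door and the robot's position alone, and so on for every fluent in the world; that is exactly the bookkeeping the frame problem asks us to avoid.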

These ideas require extensions to logic, but much efficiency has been achieved by restrictions. We need logical systems that can reason about their own methods.

THREE OF THE PROBLEMS BETWEEN US AND HUMAN-LEVEL AI?

Non-monotonic reasoning. If I hire you to build me a bird cage, you must presume the bird can fly. But if you then learn my bird is a penguin, you can no longer infer that. Non-monotonic logic has formalized such reasoning, e.g. with circumscription, and applied it to cases like birds, but a general way of doing non-monotonic reasoning is yet to come.
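The bird-cage example can be mimicked in a few lines. This Python sketch of mine illustrates only the non-monotonic pattern, where adding a fact retracts a conclusion; it is not an implementation of circumscription.

```python
def flies(facts):
    """Default rule: a bird flies unless it is known to be abnormal
    (here, being a penguin is the abnormality)."""
    if "penguin" in facts:
        return False
    return "bird" in facts

# Monotonic logic only gains conclusions as facts are added;
# here, adding a fact removes one:
print(flies({"bird"}))             # True  -- so the cage needs a top
print(flies({"bird", "penguin"}))  # False -- the default is defeated
```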

Logical treatment of partly defined objects. Examples: the snow and rock that make up Mount Everest, the wants of the U.S. Idea: an object that is ill-defined in general may have an if-and-only-if definition in a particular context. Start from the easy contexts. Example: mother, to a small child.

Self-awareness. Example: how do you know you can't infer whether George Bush is sitting or standing at this moment? How should a robot know?

I know of several more problems AI has to solve, e.g. formalized contexts. There are probably several important problems no one knows about yet. How long will it be before they are identified and solved--or turn out to be automatic consequences of a general method?

These ideas are elaborated in articles reprinted on my web site http://www-formal.stanford.edu/jmc/.

HOW LONG TO HUMAN LEVEL?

Maybe five years. Maybe 500 years. If your interest is in dates, begin by speculating about when general non-monotonic reasoning will be understood.

Human-level AI is more likely to result from individual basic research, theoretical and experimental, than from research programmes recommended by committees. None of my work was on a topic suggested by a committee.

A MODEST PROPOSAL

This proposal is based on the proposition that new concepts are probably necessary for human-level AI, and that these concepts are more likely to come from individuals than from programmes proposed by committees.

The proposal is for a programme of individual fellowships.

Each fellowship supports the individual for five years.

The programme is aimed at graduate students or new postdocs.

The recipient can move among institutions.

Except in unusual circumstances no intermediate progress reports are required.

The programme could be run by NSF or DARPA or started with the $7 million that AAAI has left.




John McCarthy
2006-11-27