The four organizers of the 1956 Dartmouth Workshop on artificial intelligence were John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The workshop has some prehistory.
My own interest in AI was triggered by attending the September 1948 Hixon Symposium on Cerebral Mechanisms in Behavior, held at Caltech, where I was starting graduate work in mathematics. At this symposium the computer and the brain were compared, though the comparison was necessarily theoretical, since the first stored-program computers were completed only in 1949. The idea of intelligent computer programs isn't in the proceedings of the symposium, although it might have been discussed. I developed some ideas about intelligent finite automata but found them unsatisfactory and didn't publish. Consequently, I wrote my Princeton PhD thesis in differential equations in 1951.
Marvin Minsky was independently interested in what became AI and in 1950, while a senior at Harvard, built, along with Dean Edmonds, a simple neural net learning machine. At Princeton, Minsky pursued his interest in AI, and his 1954 PhD thesis established the criterion for a neuron to be a universal computing element.
Claude Shannon proposed a chess program in 1950 and built many small relay machines exhibiting some features of intelligence, one being a mouse that searched a maze to find a goal.
In the summer of 1952 Shannon supported Minsky and me at Bell Telephone Laboratories. The result of my efforts was a paper on the inversion of functions defined by Turing machines. I was also unsatisfied with this approach to AI.
Also in 1952 Shannon and I invited a number of researchers to contribute to a volume entitled Automata Studies that finally came out in 1956.
I came to Dartmouth College in 1954 and was invited by Nathaniel Rochester of IBM to spend the summer of 1955 in his Information Research Department in Poughkeepsie, NY. Rochester had been the designer of the IBM 701 computer.
While at IBM, Rochester and I got Minsky and Shannon to join us in proposing the Dartmouth workshop. The proposal, requesting funds from the Rockefeller Foundation, was written in August 1955 and is the source of the term artificial intelligence. The term was chosen to nail the flag to the mast, because I (at least) was disappointed at how few of the papers in Automata Studies dealt with making machines behave intelligently. We wanted to focus the attention of the participants.
The original idea of the proposal was that the participants would spend two months at Dartmouth working collectively on AI and, we hoped, would make substantial advances.
It didn't work out that way for two reasons. First, the Rockefeller Foundation gave us only half the money we asked for. Second, and this is the main reason, the participants all had their own research agendas and weren't much deflected from them. As a result, the participants came to Dartmouth at varied times and for varying lengths of time.
Two people who might have played important roles at Dartmouth were Alan Turing, who first understood that programming computers was the main way to realize AI, and John von Neumann. Turing had died in 1954, and by the summer of 1956 von Neumann was already ill from the cancer that killed him early in 1957.
What did happen that summer at Dartmouth?
Newell and Simon, who only came for a few days, were the stars of the show. They presented the logic theory machine and compared its output with protocols from student subjects. The students were not supposed to understand propositional logic but just to manipulate symbol strings according to the rules they were given. They also described representing formulas by list structures and their IPL language.
I thought list structures were a great idea but didn't like the IPL language and immediately thought of using Fortran augmented by list-processing primitive operations coded in machine language. Fortran manuals existed at the time, but an operational Fortran wasn't quite ready.
Alex Bernstein of IBM presented his chess program, then under construction. My reaction was to invent alpha-beta pruning and recommend it to him. He was unconvinced.
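Alpha-beta pruning can be sketched in a few lines. The game tree and leaf values below are invented purely for illustration (they have nothing to do with Bernstein's program); a real chess program would generate moves and evaluate positions instead.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot affect the result."""
    if isinstance(node, int):        # a leaf: its static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # the minimizer will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:        # the maximizer will never allow this line
                break
        return value

# A toy tree: inner lists are positions, integers are leaf evaluations.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 3
```

The point of the cutoffs is that whole subtrees (here, the second leaf of each later branch) are never evaluated, which is what makes deep game-tree search affordable.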
My ideas about representing common sense knowledge and reasoning in mathematical logic were still too ill formed for me to present them. Maybe if there had been some logicians at the meeting I'd have hoped for their interest and help. It was another two years before I was ready to present a paper on the subject.
Minsky presented his idea for a plane geometry theorem prover that would avoid much combinatorial explosion by attempting to prove only statements that were true in a diagram. Nat Rochester took this idea back to IBM with him and set Herbert Gelernter, a new IBM hire, to work on it with me as a consultant. Gelernter developed the Fortran List Processing Language (FLPL) for implementing the prover. In 1958, responding to FLPL's lack of recursion and other infelicities, I proposed Lisp.
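Why recursion matters for list processing is easy to illustrate in modern terms. The following sketch is in Python rather than FLPL or early Lisp, and the function is my own toy example, but it shows the kind of definition that is natural once list structures and recursion are both available: a nested list is traversed by a function that calls itself on each sublist.

```python
def count_atoms(s):
    """Count the atoms (non-list elements) in an arbitrarily nested list."""
    if not isinstance(s, list):
        return 1                                  # an atom counts as one
    return sum(count_atoms(x) for x in s)         # recurse into sublists

print(count_atoms([1, [2, [3, 4]], 5]))  # → 5
```

Without recursion, such a traversal needs an explicit stack managed by hand, which is exactly the sort of infelicity that pushed toward Lisp.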
I remember well only the events at Dartmouth that intersected with my own scientific interests, so this is not a comprehensive account of what went on. Good work that I am ignoring here includes Raymond Solomonoff's work on algorithmic information and E. F. Moore's further development of his ideas on automata.
What came out of Dartmouth?
I think the main thing was the concept of artificial intelligence as a branch of science. Just this inspired many people to pursue AI goals in their own ways.
My hope for a breakthrough towards human-level AI was not realized at Dartmouth, and while AI has advanced enormously in the last 50 years, I think new ideas are still required for the breakthrough.
What has happened since 1956?
AI research split, perhaps even before 1956, into approaches based on imitating the nervous system and the engineering approach of looking at what problems the world presents to humans, animals, and machines attempting to achieve goals including survival. Neither has achieved human-level AI. Proposals that one approach should be abandoned and all resources put into the other are silly, as well as being unlikely to happen. I'll confine myself to engineering approaches.
Within the engineering approach, the greatest successes have come in making computer programs for particular tasks, e.g. playing chess and driving an off-road vehicle. None of these purport to have achieved general common sense knowledge. Thus the chess programs do not know that they are chess programs; their ontology consists mainly of particular positions.
The logical AI approach, starting with my 1959 paper Programs with Common Sense, is in principle more ambitious. It requires representing facts about the world in languages of mathematical logic and solving problems by logical reasoning. It faces many difficulties, some of which have been overcome, and there are proposals for overcoming others. Nevertheless, there is still not a well accepted plausible plan for reaching human-level AI.
For some years, I have thought mathematical logic needs to be extended in order to represent common sense knowledge and reasoning. That extensions are possible may seem paradoxical in light of Gödel's 1929 completeness theorem for first order logic. (Don't confuse this with his 1931 incompleteness theorem for formalized arithmetic.) The 1929 theorem tells us that any sentence true in all models of some premises has a proof from those premises. Therefore, any genuine extension of logic must allow inferring some sentences that are untrue in some models of the premises.
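In symbols, the completeness theorem says that for a set of first-order premises $\Gamma$ and a sentence $\varphi$,

\[
\Gamma \models \varphi \iff \Gamma \vdash \varphi ,
\]

i.e. $\varphi$ is true in every model of $\Gamma$ exactly when $\varphi$ is provable from $\Gamma$. So a rule of inference that genuinely goes beyond $\vdash$ must license some conclusion $\varphi$ with $\Gamma \not\models \varphi$, that is, one false in at least one model of the premises.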
The various systems of formalized nonmonotonic reasoning do precisely that. They allow inferring sentences true in preferred models of the premises. Human commonsense reasoning is often nonmonotonic, and human-level logical AI requires nonmonotonic reasoning, but how to do this in a sufficiently general way is still undiscovered.
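The defining behavior of nonmonotonic reasoning is that adding a premise can retract a conclusion. The standard textbook default "birds fly, unless abnormal (e.g. a penguin)" makes this concrete; the Python sketch below is my own toy construction, not any of the formal systems (circumscription, default logic, etc.), and is meant only to exhibit the nonmonotonicity.

```python
def flies(facts):
    """Default rule: a bird flies unless known to be a penguin.
    `facts` is a set of premises; absence of "penguin" is treated
    as a preferred-model assumption, not as proven."""
    return "bird" in facts and "penguin" not in facts

print(flies({"bird"}))             # → True: flight inferred by default
print(flies({"bird", "penguin"}))  # → False: new premise retracts it
```

In monotonic logic, enlarging the premise set can never remove a conclusion; here the extra premise "penguin" does exactly that, which is why no monotonic proof system can capture this inference.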
The need for nonmonotonic reasoning is well accepted in AI, although for specific domains, the human designer often decides what interpretations are preferred and relegates only monotonic reasoning to the computer. This is at the cost of generality.
Besides nonmonotonic reasoning, I propose other extensions to logic for doing common sense reasoning. These include systems with concepts as objects, systems with contexts as objects, and the admission of entities that cannot be characterized by if-and-only-if definitions. I'm sure there's lots more needed before logic fully covers common sense. My proposals are in articles published here and there but all available from my web page http://www-formal.stanford.edu/jmc/.
Besides proposals for extending logic, there are many systems that restrict logic in order to make computation more efficient. I'd prefer to use full logic but want systems that can reason about their own reasoning methods in order to decide on efficient reasoning. After all these years, I still have not been able to make specific proposals.