
HUMAN-LEVEL AI IS HARDER THAN IT SEEMED IN 1955
John McCarthy, Stanford University

I hoped the 1956 Dartmouth summer workshop would make major progress.

John von Neumann was busy dying. Anyway, he disapproved of the Newell-Simon work on chess, and probably of AI in general.

If my 1955 hopes had been realized, human-level AI would have been achieved before many (most?) of you were born.

Marvin Minsky, Ray Solomonoff, and I made progress that summer. Newell and Simon showed their previous work on IPL and the Logic Theorist. Lisp was based on IPL+Fortran+abstraction.

AI is OK--mostly

Chess programs capture some of the human chess-playing abilities, but they rely on the limited effective branching of the chess move tree. The ideas that work for chess are inadequate for Go.

Alpha-beta pruning characterizes human play, but it wasn't noticed by the early chess programmers--Turing, Shannon, Pasta and Ulam, and Bernstein. We humans are not very good at identifying the heuristics we ourselves use. Approximations to alpha-beta were used by Samuel, Newell and Simon, and McCarthy. It was proved equivalent to minimax by Hart and Levine, and independently by Brudno. Knuth gives details.
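The pruning idea can be sketched in a few lines. This is a minimal illustration on a hypothetical explicit game tree, not a reconstruction of any of the historical programs:

```python
# Minimal alpha-beta pruning sketch. Leaves are static evaluations
# (numbers); internal nodes are lists of children. The tree below is
# hypothetical, chosen only to show a cutoff occurring.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # leaf: return static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cutoff
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree))  # same value plain minimax would compute: 6
```

The point of the equivalence results cited above is exactly the comment on the last line: the pruned search returns the minimax value while examining fewer positions.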

Theorem proving--Newell-Simon, Boyer-Moore, resolution, SAT solvers for propositional calculus.
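For the propositional case, a toy DPLL-style SAT solver fits in a dozen lines. A sketch only; the clause encoding (nonzero ints, negative for negation) and the example formula are my own illustration:

```python
# Toy DPLL-style SAT solver sketch. A formula is a list of clauses;
# a clause is a list of nonzero ints (positive = variable true,
# negative = negated). Returns a satisfying set of literals, or None.

def dpll(clauses, assignment=()):
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                        # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                     # empty clause: conflict
        simplified.append(reduced)
    if not simplified:
        return set(assignment)              # every clause satisfied
    lit = simplified[0][0]                  # branch on the first open literal
    return (dpll(simplified, assignment + (lit,))
            or dpll(simplified, assignment + (-lit,)))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
print(model)  # a satisfying assignment, e.g. {1, 3, -2}
```

Modern SAT solvers add unit propagation, clause learning, and clever heuristics on top of essentially this backtracking skeleton.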

Logical AI, reduced logical AI in various forms.

CYC

RKF?, Semantic web?

DARPA car race

but it could be better.

OBSTACLES

One's time estimates are based on the obstacles one can see.

My 1958 "Programs with common sense" made projections (promises?) that no one has yet fulfilled.

That paper proposed that theorem proving and problem solving programs should reason about their own methods. I've tried unsuccessfully. Unification goes in the wrong direction.

There has been considerable progress in logical AI, but not enough.

BAD IDEAS--alias my prejudices

Basing machine learning on linear discriminations.
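The classic limitation can be shown in a few lines (my illustration, not the slide's): a brute-force search over candidate separating lines finds one for AND but none for XOR, which is not linearly separable:

```python
# Sketch of the limit of linear discrimination: XOR has no separating
# line, while AND does. The brute-force grid search is illustrative,
# not a proof, but no grid can succeed for XOR.

from itertools import product

def separates(table, w1, w2, b):
    """True if sign(w1*x + w2*y + b) classifies every point in table."""
    return all((w1 * x + w2 * y + b > 0) == bool(label)
               for (x, y), label in table.items())

def linearly_separable(table):
    grid = [i / 2 for i in range(-8, 9)]    # candidate weights, -4.0 .. 4.0
    return any(separates(table, w1, w2, b)
               for w1, w2, b in product(grid, repeat=3))

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(linearly_separable(AND), linearly_separable(XOR))  # True False
```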

Basing ontology on hierarchies of unary predicates, e.g. semantic networks.

Basing theorem proving on resolution. Getting statements into clausal form throws away information.
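As an illustration of that loss (my example, not the slide's): two implications with different intended uses collapse into one clause, so nothing in the clausal form marks which literal was meant as the conclusion:

```latex
% p -> (q -> r)  and  (p /\ q) -> r  share one clausal form:
\[
  p \supset (q \supset r)
  \qquad\text{and}\qquad
  (p \land q) \supset r
  \qquad\leadsto\qquad
  \lnot p \lor \lnot q \lor r
\]
```

The two formulas are logically equivalent, but the syntactic structure that guides how a human (or a goal-directed prover) would use them is gone.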

Entering knowledge without logic (RKF).

Also: XML (they should have used Lisp lists), TeX macros, committee science.
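On the XML point, a sketch contrasting the same record (hypothetical data, chosen only to compare the notations) as XML markup and as a Lisp-style nested list:

```python
# The same record as XML text and as a Lisp-style nested list
# (rendered as an S-expression). Data is hypothetical.

xml_form = "<person><name>John</name><born>1927</born></person>"

sexp_form = ["person", ["name", "John"], ["born", 1927]]

def to_sexp_text(node):
    """Render a nested list in S-expression syntax."""
    if not isinstance(node, list):
        return str(node)
    return "(" + " ".join(to_sexp_text(n) for n in node) + ")"

print(to_sexp_text(sexp_form))   # (person (name John) (born 1927))
print(len(xml_form), len(to_sexp_text(sexp_form)))
```

The S-expression carries the same tree with no closing tags to keep consistent, and it is already a parsed data structure in Lisp.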

EXCUSES

We aren't smart enough. An Einstein might have done better.

They didn't give us enough money. Not the main problem.

It was 100 years from Mendel to the genetic code.

Inadequate ideas, e.g. GPS (the General Problem Solver).

The neural net people aren't there either.

Too much grabbing for what could be applied in the short term. The call for this symposium exhibits that fault.

COMMITTEE SCIENCE

"This formula is all very well, Herr Einstein, but we don't see it increasing the German GDP in the next 10 or even 20 years. Develop some applications and then submit another proposal."

"We are forming a committee on theory and applications of co-ordinate transformations. We suggest, Herr Einstein, that you contact the chairman of the committee."

I was treated to a talk this morning that emphasized what the funders want. That's not the path to scientific progress. Also it wasn't even clear that the funders know what they want. I hope NSF stays out of these committee science consortia and concentrates on proposals from individuals. Computer science, and especially AI, seems particularly afflicted with committee science.

WHITHER?

Provers that reason about their methods.

Adapt mathematical logic to express common sense. A continuing problem.

COMMON SENSE IN MATHEMATICAL LOGIC--my panacea

Example formula (not a whole theory):


Problems:

Non-monotonic reasoning

Contexts

Approximate objects and theories
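A formula in the intended spirit, touching the first of these problems: McCarthy's well-known circumscription-style default about flying birds, offered here as an illustrative sketch rather than the slide's own example:

```latex
% Default: birds fly unless abnormal; the abnormality predicate ab
% is minimized (circumscribed).
\[
  \forall x.\; \mathit{bird}(x) \land \lnot \mathit{ab}(x) \supset \mathit{flies}(x)
\]
% Minimizing ab yields flies(Tweety) from bird(Tweety) alone, yet the
% conclusion is withdrawn if ab(Tweety) is later asserted (say, Tweety
% turns out to be a penguin). That withdrawal is non-monotonicity.
```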

If the above explanation is perfectly clear, you don't need to take a course in logical AI.

People who put knowledge into computers need mathematical logic, including quantifiers, as much as engineers need calculus. Alas, logic for freshmen isn't developed beyond propositional calculus.




John McCarthy
2006-11-27