SEMINAR ON HUMAN-LEVEL AI

New stuff

When something is no longer new, it will be removed from this section.


Yoav Shoham will lecture on May 23.

Title: Two unrelated thoughts about important issues in attaining artificial intelligence.

Abstract:

I'll lead a discussion on two issues that I've thought about a little recently.

One is the tension in AI between the need to abstract and the need to explicate. Although less guilty of it than other fields (for example theoretical AI and economics), AI does tend to create abstractions without being very explicit about the things of which they are abstractions. This has been the case, for example, in nonmonotonic reasoning and belief revision. The message I'll try to argue for: explicate.

The other issue is the relative neglect in AI of modeling and reasoning about the motivational components of intelligence. We worry about drawing solid inferences and devising efficient plans, but much less about which inferences matter and which goals are worth planning for. As Kundera says, the hardest thing is to know what to want. I'll chat a little about some recent work in this area. The message I'll try to push: formalize motivational attitudes, and pay attention to economics.


Stan Rosenschein lectured on both May 9 and May 16.

Representational Transparency and the Scalability of AI Systems

Stan Rosenschein

Symbolic AI is driven by a simple intuition and a working assumption. The intuition: That higher intelligence rests on rich representational states and general methods for updating and exploiting them. The working assumption: That large-scale AI systems can be built by hand-coding large numbers of facts in a symbolic language semantically transparent to humans.

What if the intuition were valid but the working assumption false? Would there still be a path to scalable AI? What new ways of thinking about representation would be required?


INTRODUCTION

The founders of AI research all had human-level AI as a goal. However, as AI research split up into many subfields and these split further, most research limited its ambitions. When students enter the field of AI, they almost always become attached to some ongoing activity with limited goals.

It is time to think about how AI can get from where it is now to human level. It is especially important that some students and other young people think about how this is to be accomplished.

There are lots of ideas, but new ideas are needed.

The Seminar on Human-level AI will meet at 3:15 pm on Fridays in Gates 104.

Potential volunteers to speak should email John McCarthy (jmc@cs.stanford.edu, 723-4430) or Nils Nilsson (nilsson@cs.stanford.edu, 723-3886).

Student volunteer speakers are welcome.

Alternative viewpoints about how human-level AI is to be achieved are welcome; in particular, we lack connectionist and neural net advocates.

MEETINGS

The first meeting was on Tuesday, 1997 April 1. The speaker was Professor Donald Michie of the University of Edinburgh. The second meeting is Friday, 1997 April 11 at 3:15 pm. Professor Nils Nilsson will lead off, and expects to leave half the time for general discussion. Please prepare to participate and (if you wish) make your own proposals for reaching human-level AI. Here's the announcement.


Human-Level AI: Big Ideas Needed

Friday, April 11, 1997, 3:15 p.m., Gates 104

Abstract: It isn't for lack of filling in the details of the current stock of AI ideas that we don't yet have human-level AI. We need more "Big Ideas," and we need more people looking for them. I will propose projects that I think will stimulate invention. Instead of writing a thesis on "A New Method of Using MDL in Bayes Net Learning," write one on "Regeneration: A Technique for Automatic Construction of Architectural Layers in a Robot."

Seminar attendees might want to look at:

1) "Eye on the Prize" (preprint of his AI Magazine article). Pointed to from Nilsson's home page.

2) His piece, "Toward Flexible and Robust Robots," in "Challenge Problems for AI," published in Proc. of AAAI-96. Here's a PostScript version.

Here are the slides for Nils Nilsson's 1997 April 11 lecture.


John McCarthy spoke on April 18, Rich Fikes on April 25, and that's all the speakers we have so far. McCarthy spoke at the conference on knowledge representation in 1996 November with the title From Here to Human-Level AI. His April 18 talk did not repeat the KR96 talk but focused on two of the many problems that must be solved to reach human-level AI. One of them is elaboration tolerance, and the other is combining logic with visualization.

The idea of choosing just two topics was to allow increased discussion.

Maybe someone would argue that these two capabilities are incorrectly formulated, that they are not needed for human-level AI, or that they will come as byproducts of something else. No one did.

Elaboration tolerance: A human with some facts can accept modifications of these facts expressed in natural language. For example, given the facts about the seminar, you would understand the elaboration that everyone will be searched for weapons at the door of the seminar. Some logical formalizations are more elaboration tolerant than others, and connectionist and neural net formalizations often seem to have no elaboration tolerance at all.
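As a toy illustration (this is not McCarthy's formalization; the fact strings and the elaborate function below are invented for the sketch), one can picture an elaboration-tolerant representation as a set of declarative sentences to which an elaboration is simply added, leaving the existing facts untouched:

    # Minimal sketch: facts about the seminar held as declarative sentences.
    # Accepting an elaboration means adding one more sentence; nothing already
    # present has to be rewritten.
    seminar_facts = [
        "the seminar meets on Fridays at 3:15 pm",
        "the seminar meets in Gates 104",
    ]

    def elaborate(facts, new_sentence):
        """Accept an elaboration by adding a sentence alongside the existing facts."""
        return facts + [new_sentence]

    # The elaboration from the example above.
    elaborated = elaborate(
        seminar_facts,
        "everyone is searched for weapons at the door of the seminar",
    )

    for sentence in elaborated:
        print(sentence)

A representation that hard-wired the seminar procedure into code, or into a trained network's weights, would instead have to be rebuilt to absorb the same change; that contrast is the point of the example.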

Logic and visualization: Consider the counterfactual conditional sentence "If another car had come over the hill when you passed that Mercedes, there would have been a head-on collision". Evaluating the truth of this statement involves both logical reasoning and some kind of abstract mental simulation as well as observing the current situation.

