This can be done by taking advantage of LISP's theoretical foundation.

As soon as pure LISP took its present form, it became apparent that properties of LISP functions should be provable by algebraic manipulation together with an appropriate form of mathematical induction. This gave rise to the goal of creating a mathematical theory of computation that would lead to computer-checked proofs [McC62] that programs meet their specifications. Because LISP functions may be partial, standard logical techniques weren't immediately applicable, although recursion induction [McC63] quickly became available as an informal method. The methods of [Kle52] might have been adapted to proving properties of programs had anyone who understood them well been suitably motivated and seen the connections.
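Recursion induction can be illustrated with the classic append example (my sketch, not taken from the text above): two functions satisfying the same recursion equation are equal wherever the recursion terminates.

```lisp
;; A sketch of recursion induction, using the standard APPEND function
;; (my choice of example), here written in modern S-expression notation.
(defun app (u v)
  (if (null u)
      v
      (cons (car u) (app (cdr u) v))))

;; To prove  app[app[u,v],w] = app[u,app[v,w]],  one checks that,
;; viewed as functions of u (with v and w fixed), both sides satisfy
;; the same recursion equation
;;
;;    h[u] = if null u then app[v, w] else cons[car u, h[cdr u]],
;;
;; so by recursion induction they are equal wherever the recursion
;; terminates -- here, for all finite lists u.
```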

The first adequate formal method was based on Cartwright's thesis [Car77], which permits a LISP function definition such as

append[u, v] ← if null u then v else cons[car u, append[cdr u, v]]

to be replaced by a first order sentence

∀u v. append[u, v] = if null u then v else cons[car u, append[cdr u, v]]
without first having to prove that the program terminates for any
lists *u* and *v*. The proof of termination has exactly the same
form as any other inductive proof. See also [CM79].
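For instance (my sketch, using the standard append function on lists), termination is established by the same induction one would use for any other property:

```
Claim: append[u, v] is defined for all lists u and v, where
       append[u, v] = if null u then v
                      else cons[car u, append[cdr u, v]].

Induction on the length of u:
  Basis: if null u, then append[u, v] = v, which is defined.
  Step:  if append[cdr u, v] is defined, then so is
         cons[car u, append[cdr u, v]], and hence append[u, v].
```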

The Elephant formalism (McCarthy 1981 forthcoming) supplies a second method appropriate for sequential LISP programs. Boyer and Moore [BM79] provide proof finding as well as proof checking in a different formalism that requires a proof that a function is total as part of the process of accepting its definition.

I should say that I don't regard the LCF methods as adequate, because the ``logic of computable functions'' is too weak to fully specify programs.

These methods (used informally) have been successfully taught as part of the LISP course at Stanford and will be described in the textbook (McCarthy and Talcott 1980). It is also quite feasible to check the proofs by machine using Richard Weyhrauch's FOL interactive proof-checker for first order logic, but practical use requires a LISP system that integrates the proof checker with the interpreter and compiler.

The ultimate goal of computer proof-checking is a system that will be used by people without mathematical inclination simply because it leads more quickly to programs without bugs. This requires further advances that will make possible shorter proofs and also progress in writing the specifications of programs.

Probably some parts of the specifications such as that the program terminates are almost syntactic in their checkability. However, the specifications of programs used in AI work require new ideas even to formulate. I think that recent work in non-monotonic reasoning will be relevant here, because the fact that an AI program works requires jumping to conclusions about the world in which it operates.

While pure LISP and the simple form of the ``program feature'' are readily formalized, many of the fancier features of the operational LISP systems such as Interlisp, Maclisp and Lisp Machine LISP [WM78] are harder to formalize. Some of them like FEXPRs require more mathematical research, but others seem to me to be kludges and should be made more mathematically neat both so that properties of programs that use them can be readily proved and also to reduce ordinary bugs.

The following features of present LISP systems and proposed extensions require new methods for correctness proofs:

- Programs that involve re-entrant list structure. Those that don't involve *rplaca* and *rplacd*, such as search and print programs, are more accessible than those that do. I have an induction method on finite graphs that applies to them, but I don't yet know how to treat *rplaca*, etc. Induction on finite graphs also has applications to proving theorems about flowchart programs.
- No systematic methods are known for formally stating and proving properties of syntax directed computations.
- Programs that use macro expansions are in principle doable via axiomatizations of the interpreter, but I don't know of any actual formal proofs.
- No techniques exist for correctness proofs of programs involving lazy evaluators.
- Programs with functional arguments are in principle accessible by Dana Scott's methods, but the different kinds of functional arguments have been treated only descriptively and informally.
- Probably the greatest obstacle to making proof-checking a useful
tool is our lack of knowledge of how to express the specifications
of programs. Many programs have useful partial specifications -
they shouldn't loop or modify storage that doesn't belong to them.
A few satisfy algebraic relations, and this includes compilers.
However, programs that interact with the world have specifications
that involve assumptions about the world. AI programs in general
are difficult to specify; most likely their very specification
involves default and other non-monotonic reasoning. (See [McC80].)
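A minimal sketch (my example, not from the text above) of why re-entrant list structure defeats ordinary structural induction: *rplacd* can make a list circular, so a naive traversal no longer terminates, and induction on the structure of its argument no longer applies.

```lisp
;; Naive traversal, correct for ordinary (acyclic) lists.
(defun naive-length (u)
  (if (null u)
      0
      (+ 1 (naive-length (cdr u)))))

(let ((u (list 1 2 3)))
  ;; Make the last cdr point back at the head: u is now re-entrant.
  (rplacd (cddr u) u)
  ;; (naive-length u) would now recur forever.  Structural induction
  ;; on u no longer applies, though an induction method on finite
  ;; graphs, as suggested above, might.
  )
```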
