Recursion and iteration
You will not find primitives for expressing iteration in Lisp (although syntactic
extensions which look like iteration may exist in the Lisp you are using ... in
fact, they will be turned into recursions). This is partly because of Lisp's
origins in the lambda-calculus and partly because, long ago, Lisp implementors
realised that a special case of recursion can be treated as iteration. Stated the
other way around: iteration is a degenerate case of recursion. We will see what
this means in practice on this page.
Raising an integer to an integer power
This example is instructive for two reasons: it can be used to illustrate the
conversion of recursion to iteration (so called tail-recursion) and to illustrate a
nice algorithmic improvement at the cost of a less transparent program.
We begin by noting the equation a^n = a^(n-1) * a and that a^0
= 1. From this, it is fairly straightforward to derive a program:
(defun pow (a n)
  (if (= n 0)
      1
      (* a (pow a (- n 1)))))
If we use the method developed earlier to explain its execution, then we see
the following:
(pow 3 4)
=> (* 3 (pow 3 3))
=> (* 3 (* 3 (pow 3 2)))
=> (* 3 (* 3 (* 3 (pow 3 1))))
=> (* 3 (* 3 (* 3 (* 3 (pow 3 0)))))
=> (* 3 (* 3 (* 3 (* 3 1))))
=> (* 3 (* 3 (* 3 3)))
=> (* 3 (* 3 9))
=> (* 3 27)
81
This program has a significant inefficiency, in that the number of recursive
calls is proportional to the argument n and hence the depth of the stack is
also proportional to n. In algorithmic analysis terms, it is a linear or O(n)
algorithm. In a procedural language such as FORTRAN, Pascal, or C, the program
could be expressed in the same way, but it would be more idiomatic to use a
looping construct. Of course, it is well known that loops are a degenerate case
of recursion, and one particular case of recursion can be optimized into a loop
by the compiler: indeed, it is a feature programmers rely upon. This case is
when the last function called in a function's body is the function itself.
It is not necessary to call the function again; instead, we can just jump back
to the beginning of the function. We can rewrite pow to permit this
optimization by adding a third parameter in which we accumulate the result:
(defun pow (a n x)
  (if (= n 0)
      x
      (pow a (- n 1) (* a x))))
For this to work, the value of x in the initial call must be 1.
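Callers should not really have to remember to supply that extra 1. One common arrangement, sketched here under a hypothetical name (pow-iter is ours, not part of the text), keeps the accumulator in a helper so that pow keeps its original two-argument interface:

```lisp
;; A sketch: the accumulating version moves into a helper,
;; and pow supplies the initial accumulator value of 1 itself.
(defun pow-iter (a n x)
  (if (= n 0)
      x
      (pow-iter a (- n 1) (* a x))))

(defun pow (a n)
  (pow-iter a n 1))
```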
Now the evaluation proceeds like this:
(pow 3 4 1)
=> (pow 3 3 3)
=> (pow 3 2 9)
=> (pow 3 1 27)
=> (pow 3 0 81)
81
Now the number of recursive "jumps" is proportional to the argument n, as is
the number of multiplications we carry out (as before), but the depth of the
stack is fixed.
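When the compiler performs this optimization, the effect is as if we had written an explicit loop. In a dialect that provides while and setq (Emacs Lisp does, for instance; this is an assumption about your Lisp), the loop the compiler effectively produces might look like this (pow-loop is our name for it):

```lisp
;; A sketch of the loop equivalent to the tail-recursive pow:
;; the tail call becomes a jump back to the test at the top.
(defun pow-loop (a n)
  (let ((x 1))
    (while (> n 0)
      (setq x (* a x))
      (setq n (- n 1)))
    x))
```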
We can reduce the number of recursive calls and the number of multiplications
by noting the following equation: a^n = (a^(n/2))^2 when n is even (and, for
odd n, a^n = a * (a^((n-1)/2))^2). Hence, we can derive another program:
(defun pow (a n)
  (if (= n 0)
      1
      (let ((r (pow a (/ n 2))))
        (if (= 0 (% n 2))
            (* r r)
            (* a r r)))))
Abusing our now familiar visualisation technique:
(pow 3 4)
=> ... (pow 3 2)
=> ... ... (pow 3 1)
=> ... ... ... (pow 3 0)
=> ... ... ... 1
=> ... ... (* 3 1 1)
=> ... (* 3 3)
=> (* 9 9)
81
Although this is not markedly different from the earlier examples, the
important point to note is that sometimes the intermediate result is squared.
In this particular example, we have the same number of multiplications as in
the previous version of pow, but consider the case of (pow 3 8). That can be
achieved with one more multiplication here, while the previous version needs
another four. In fact, the number of multiplications is proportional to the
number of times the exponent can be divided by two, that is, its logarithm in
base 2. Thus, this is called a logarithmic or O(log n) algorithm.
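The two improvements can be combined: rather than squaring the result after the recursive call returns, we can square the base before making the call, so that the call is in tail position. This is a sketch, not from the text: the name pow-fast and its accumulator argument are ours, and as in the version above we assume / truncates when dividing integers. The accumulator x must initially be 1.

```lisp
;; Invariant: the final answer is always x * a^n.
;; Even n: a^n = (a*a)^(n/2), accumulator unchanged.
;; Odd n:  a^n = a * (a*a)^((n-1)/2), so fold one a into x.
(defun pow-fast (a n x)
  (if (= n 0)
      x
      (if (= 0 (% n 2))
          (pow-fast (* a a) (/ n 2) x)
          (pow-fast (* a a) (/ n 2) (* a x)))))
```

This keeps the O(log n) number of multiplications while regaining the fixed stack depth of the accumulating version.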
Reversing a list revisited
Now that we have been sensitized to the analysis of recursive algorithms and,
in particular, the utility of converting proper recursion into tail recursion,
we look again at the definition of reverse given earlier, under function
definition:
(defun reverse-list (l)
  (if (null l)
      ()
      (append (reverse-list (cdr l)) (list (car l)))))
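Applying the earlier visualisation technique to a three-element list makes the cost visible: the appends nest, and each append must walk and copy the whole of its first argument.

```lisp
(reverse-list '(1 2 3))
=> (append (reverse-list '(2 3)) (list 1))
=> (append (append (reverse-list '(3)) (list 2)) (list 1))
=> (append (append (append (reverse-list '()) (list 3)) (list 2)) (list 1))
=> (append (append (append '() (list 3)) (list 2)) (list 1))
=> (append (append (3) (list 2)) (list 1))
=> (append (3 2) (list 1))
(3 2 1)
```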
The pattern of this function is identical to that of the first version of the
power function above. There is a recursive call with a reduced argument and the
results of the recursive call are processed to build up the result. Thus, it
should be obvious that reverse-list does O(n) append operations. However, let
us consider a definition of append:
(defun append (l1 l2)
  (if (null l1)
      l2
      (cons (car l1) (append (cdr l1) l2))))
This, in turn, is linear in the cons operation, which is the most basic list
allocation operator. Thus, reverse-list does O(n^2) cons operations to reverse
a list of length n. A much more efficient solution is possible if we rewrite
reverse-list to be tail recursive: the key idea is that as we take items from
the front of the source list, they can be put on to the front of a result
list. When the source list is exhausted, the result list will be the answer:
(defun reverse-list (l x)
  (if (null l)
      x
      (reverse-list (cdr l) (cons (car l) x))))
This version is both tail recursive (the last function called is itself) and
uses a number of list cells proportional to the length of the original list,
i.e. it is O(n). As with pow, the initial call must supply a starting value
for x, in this case the empty list ().
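Tracing a call on a three-element list, with () supplied as the initial value of x, shows the now familiar shape: a fixed stack depth and one cons per element.

```lisp
(reverse-list '(1 2 3) ())
=> (reverse-list '(2 3) (1))
=> (reverse-list '(3) (2 1))
=> (reverse-list '() (3 2 1))
(3 2 1)
```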