Structure and Interpretation of Computer Programs

Authors: Harold Abelson and Gerald Jay Sussman with Julie Sussman

We say that R(n) has order of growth θ(f(n)), written R(n) = θ(f(n)) (pronounced “theta of f(n)”), if there are positive constants k1 and k2 independent of n such that

k1 f(n) ≤ R(n) ≤ k2 f(n)

for any sufficiently large value of n. (In other words, for large n, the value R(n) is sandwiched between k1 f(n) and k2 f(n).)
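
As a concrete check (a worked instance added for illustration, not from the text): take R(n) = 3n^2 + 10n + 17. For n ≥ 12 we have 10n + 17 ≤ n^2, so

3n^2 ≤ R(n) ≤ 4n^2

for all such n, and therefore R(n) = θ(n^2), with k1 = 3 and k2 = 4.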

For instance, with the linear recursive process for computing factorial described in section 1.2.1, the number of steps grows proportionally to the input n. Thus, the steps required for this process grow as θ(n). We also saw that the space required grows as θ(n). For the iterative factorial, the number of steps is still θ(n) but the space is θ(1) – that is, constant.³⁶ The tree-recursive Fibonacci computation requires θ(φ^n) steps and space θ(n), where φ is the golden ratio described in section 1.2.2.
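
For reference, these are the two factorial procedures in question, reproduced here from section 1.2.1 as a convenience (if both are loaded together, the second definition of factorial shadows the first):

;; Linear recursive version: θ(n) steps, θ(n) space.
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

;; Linear iterative version: θ(n) steps, θ(1) space.
(define (factorial n)
  (fact-iter 1 1 n))
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))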

Orders of growth provide only a crude description of the behavior of a process. For example, a process requiring n^2 steps and a process requiring 1000n^2 steps and a process requiring 3n^2 + 10n + 17 steps all have θ(n^2) order of growth. On the other hand, order of growth provides a useful indication of how we may expect the behavior of the process to change as we change the size of the problem. For a θ(n) (linear) process, doubling the size will roughly double the amount of resources used. For an exponential process, each increment in problem size will multiply the resource utilization by a constant factor. In the remainder of section 1.2 we will examine two algorithms whose order of growth is logarithmic, so that doubling the problem size increases the resource requirement by a constant amount.

Exercise 1.14.  Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of steps used by this process as the amount to be changed increases?

Exercise 1.15.  The sine of an angle (specified in radians) can be computed by making use of the approximation sin x ≈ x if x is sufficiently small, and the trigonometric identity

sin x = 3 sin(x/3) - 4 sin^3(x/3)

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered “sufficiently small” if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:

(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))

a.  How many times is the procedure p applied when (sine 12.15) is evaluated?

b.  What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
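
One way to explore part a empirically (a sketch added here, not part of the text; the name p-count is introduced for illustration) is to redefine p so that it counts its own applications:

(define p-count 0)
(define (p x)
  (set! p-count (+ p-count 1))      ; record one application of p
  (- (* 3 x) (* 4 (cube x))))

After evaluating (sine 12.15), the value of p-count is the number of applications. (Assignment with set! is not introduced until chapter 3 of the book, so treat this purely as a debugging aid.)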

1.2.4  Exponentiation

Consider the problem of computing the exponential of a given number. We would like a procedure that takes as arguments a base b and a positive integer exponent n and computes b^n. One way to do this is via the recursive definition

b^n = b · b^(n-1)
b^0 = 1

which translates readily into the procedure

(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))

This is a linear recursive process, which requires θ(n) steps and θ(n) space. Just as with factorial, we can readily formulate an equivalent linear iteration:

(define (expt b n)
  (expt-iter b n 1))
(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b
                 (- counter 1)
                 (* b product))))

This version requires θ(n) steps and θ(1) space.

We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing b^8 as

b · (b · (b · (b · (b · (b · (b · b))))))

we can compute it using three multiplications:

b^2 = b · b
b^4 = b^2 · b^2
b^8 = b^4 · b^4

This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule

b^n = (b^(n/2))^2        if n is even
b^n = b · b^(n-1)        if n is odd

We can express this method as a procedure:

(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))

where the predicate to test whether an integer is even is defined in terms of the primitive procedure remainder by

(define (even? n)
  (= (remainder n 2) 0))
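
To see the method in action, here is a hand trace of (fast-expt 2 10) (an illustrative example added here, not from the text):

(fast-expt 2 10)
= (square (fast-expt 2 5))
= (square (* 2 (fast-expt 2 4)))
= (square (* 2 (square (fast-expt 2 2))))
= (square (* 2 (square (square (fast-expt 2 1)))))
= (square (* 2 (square (square (* 2 (fast-expt 2 0))))))
= (square (* 2 (square (square (* 2 1)))))
= (square (* 2 (square 4)))
= (square (* 2 16))
= (square 32)
= 1024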

The process evolved by fast-expt grows logarithmically with n in both space and number of steps. To see this, observe that computing b^(2n) using fast-expt requires only one more multiplication than computing b^n. The size of the exponent we can compute therefore doubles (approximately) with every new multiplication we are allowed. Thus, the number of multiplications required for an exponent of n grows about as fast as the logarithm of n to the base 2. The process has θ(log n) growth.³⁷

The difference between θ(log n) growth and θ(n) growth becomes striking as n becomes large. For example, fast-expt for n = 1000 requires only 14 multiplications.³⁸ It is also possible to use the idea of successive squaring to devise an iterative algorithm that computes exponentials with a logarithmic number of steps (see exercise 1.16), although, as is often the case with iterative algorithms, this is not written down so straightforwardly as the recursive algorithm.³⁹
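
One way to check the logarithmic growth directly (a sketch added here, not from the text; fast-expt-mults is a name introduced for illustration) is a variant of fast-expt that returns the number of multiplications performed rather than the power itself:

;; Count multiplications, charging one per squaring step and
;; one per odd step of the fast-expt recursion.
(define (fast-expt-mults n)
  (cond ((= n 0) 0)
        ((even? n) (+ 1 (fast-expt-mults (/ n 2))))
        (else (+ 1 (fast-expt-mults (- n 1))))))

Under this counting convention (fast-expt-mults 1000) evaluates to 15 rather than the 14 quoted above; the difference is only in whether the final b^1 step is charged as a multiplication. The logarithmic behavior is visible either way: (fast-expt-mults 2000) is exactly one more than (fast-expt-mults 1000).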

Exercise 1.16.  Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a·b^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
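
The invariant-quantity idea can already be seen in the expt-iter procedure above (annotation added here for illustration; it does not give away the exercise, which requires squaring as well):

;; Invariant: product · b^counter = b^n at every step.
;; Initially product = 1 and counter = n; each step moves a factor
;; of b from b^counter into product, so when counter reaches 0 the
;; invariant forces product = b^n.
(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b
                 (- counter 1)
                 (* b product))))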

Exercise 1.17.  The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:

(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))

This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.

Exercise 1.18.  Using the results of exercises 1.16 and 1.17, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a logarithmic number of steps.⁴⁰

Exercise 1.19.  There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables a and b in the fib-iter process of section 1.2.2: a ← a + b and b ← a. Call this transformation T, and observe that applying T over and over again n times, starting with 1 and 0, produces the pair Fib(n + 1) and Fib(n). In other words, the Fibonacci numbers are produced by applying T^n, the nth power of the transformation T, starting with the pair (1,0). Now consider T to be the special case of p = 0 and q = 1 in a family of transformations T_pq, where T_pq transforms the pair (a,b) according to a ← bq + aq + ap and b ← bp + aq. Show that if we apply such a transformation T_pq twice, the effect is the same as using a single transformation T_p'q' of the same form, and compute p' and q' in terms of p and q. This gives us an explicit way to square these transformations, and thus we can compute T^n using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:⁴¹

(define (fib n)
  (fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   <??>      ; compute p'
                   <??>      ; compute q'
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
