
Evaluation of x `seq` y proceeds by first evaluating x (to head normal form) and then returning the result of evaluating y. If evaluation of x does not terminate, then neither does x `seq` y. It's not possible to define seq in Haskell; instead Haskell provides it as a primitive function.
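
To make 'head normal form' concrete, here is a short GHCi session (easily reproduced): an expression whose outermost part is a constructor, such as a pair, is already in head normal form, so seq does not evaluate inside it:

ghci> undefined `seq` 0
*** Exception: Prelude.undefined

ghci> (undefined, undefined) `seq` 0
0

The second response foreshadows a subtlety we will meet shortly when folding with pairs.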

Now consider the following version foldl' of foldl that evaluates its second argument strictly:

foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f e [] = e
foldl' f e (x:xs) = y `seq` foldl' f y xs
  where y = f e x

Haskell provides the function foldl' in the standard prelude (yes, with just this unimaginative name). Now we can define sum = foldl' (+) 0, with the consequence that evaluation proceeds in constant space. In fact, sum is another prelude function with essentially this definition.
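
To see why the space is constant, trace the reduction of a small example; the seq forces each new accumulator before the recursive call, so no chain of suspended additions builds up:

sum [1,2,3]
= foldl' (+) 0 [1,2,3]
= (0+1) `seq` foldl' (+) (0+1) [2,3]
= foldl' (+) 1 [2,3]
= (1+2) `seq` foldl' (+) (1+2) [3]
= foldl' (+) 3 [3]
= (3+3) `seq` foldl' (+) (3+3) []
= foldl' (+) 6 []
= 6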

Is it the case that foldl is now redundant and can be replaced by the new improved foldl'? The answer is in practice yes, but in theory no. It is possible to construct f, e and xs such that

foldl f e xs ≠ foldl' f e xs

However, when f is strict (recall that f is strict if f ⊥ = ⊥) the two expressions do return the same result. The exercises go into details.
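
Here is one possible witness (a sketch; the exercises may construct a different one). Take a function that ignores its accumulator, and which is therefore not strict, and hide ⊥ in the list:

f :: Int -> Int -> Int
f _ a = a  -- ignores the accumulator, so f undefined = id, not undefined

Then foldl f 0 [undefined, 2] returns 2, because the intermediate accumulator f 0 undefined is passed along but discarded by the next application of f without ever being forced. By contrast, foldl' f 0 [undefined, 2] is undefined, because the seq in foldl' forces that intermediate accumulator.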

Taking the mean

Armed with the above information, let's now consider a very instructive example: how to compute the average or mean of a list of numbers. Surely that is an easy problem, you might think: just divide the sum of the list by the length of the list:

mean :: [Float] -> Float
mean xs = sum xs / length xs

There are lots of things wrong with this definition, not the least of which is that the expression on the right is not well-formed! The function length in Haskell has type [a] -> Int and we can't divide a Float by an Int without performing an explicit conversion.

There is a function in the standard prelude that comes to our aid:

fromIntegral :: (Integral a, Num b) => a -> b
fromIntegral = fromInteger . toInteger

Recall from Chapter 3 the two conversion functions

toInteger   :: (Integral a) => a -> Integer
fromInteger :: (Num a) => Integer -> a

The first converts any integral type to an integer, and the second converts an integer to a number. Their composition converts an integral number, such as Int, to a more general kind of number, such as Float.

We can now rewrite mean to read

mean :: [Float] -> Float
mean xs = sum xs / fromIntegral (length xs)

The second thing wrong with this definition is that it silently ignores the case of the empty list. What is 0/0? (For Float arithmetic it is the IEEE value NaN, not a run-time error.) Either we should identify the failing case with an explicit error message, or else adopt one common convention, which is to agree that the mean of the empty list should be zero:

mean [] = 0
mean xs = sum xs / fromIntegral (length xs)

Now we are ready to see what is really wrong with mean: it has a space leak. Evaluating mean [1..1000] will cause the list to be expanded and retained in memory after summing because there is a second pointer to it, namely in the computation of its length.

We can replace the two traversals of the list by one, using a strategy of program optimisation called tupling. The idea is simple enough in the present example: define sumlen by

sumlen :: [Float] -> (Float,Int)
sumlen xs = (sum xs, length xs)

and then calculate an alternative definition that avoids the two traversals. It is easy to carry out the calculation and we just state the result:

sumlen [] = (0,0)
sumlen (x:xs) = (s+x,n+1) where (s,n) = sumlen xs

The pattern of the definition of sumlen should be familiar by now. An alternative definition is

sumlen = foldr f (0,0) where f x (s,n) = (s+x,n+1)

Even better, we can replace foldr f by foldl g, where

g (s,n) x = (s+x,n+1)

The justification of this step is the law in the previous chapter that said

foldr f e xs = foldl g e xs

for all finite lists xs, provided

f x (g y z) = g (f x y) z
f x e = g e x

The verification of these two conditions is left as an exercise.
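
In outline (the exercise asks for the details), the check for these particular f and g is short, using the associativity and commutativity of addition:

f x (g (s,n) z) = f x (s+z,n+1) = (s+z+x,n+2)
g (f x (s,n)) z = g (s+x,n+1) z = (s+x+z,n+2)

f x (0,0) = (0+x,0+1) = (x,1)
g (0,0) x = (0+x,0+1) = (x,1)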

And that means we can use foldl':

sumlen = foldl' g (0,0) where g (s,n) x = (s+x,n+1)

Now we can replace our heavily criticised definition of mean by

mean [] = 0
mean xs = s / fromIntegral n
  where (s,n) = sumlen xs

Surely we have now achieved our goal of a constant-space computation for mean?

Unfortunately not. The problem is with sumlen and it is a little tricky to spot. Expanding the definition out a little, we find

foldl' g (s,n) (x:xs) = y `seq` foldl' g y xs
  where y = (s+x,n+1)

Ah, but y `seq` z reduces y to head normal form, and the expression (s+x,n+1) is already in head normal form. Its two components are not evaluated until the end of the computation. That means we have to dig deeper with our seqs and rewrite sumlen in the following way:

sumlen = foldl' g (0,0)
  where g (s,n) x = s `seq` n `seq` (s+x,n+1)

Finally, everything in the garden is rosy and we have a computation that runs in constant space.
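
As an aside, GHC offers a tidier way to write the same thing with its BangPatterns extension. The following sketch (an alternative formulation, not part of the chapter's plain Haskell) forces both components whenever g matches its accumulator:

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

sumlen :: [Float] -> (Float,Int)
sumlen = foldl' g (0,0)
  where g (!s, !n) x = (s+x,n+1)  -- the bangs force s and n on pattern match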

Two more application operators

Function application is the only operation not denoted by any visible sign. However, Haskell provides two more application operators, ($) and ($!):

infixr 0 $, $!

($), ($!) :: (a -> b) -> a -> b
f $  x = f x
f $! x = x `seq` f x

The only difference between f x and f $! x is that in the second expression the argument x is evaluated before f is applied. The only difference between f x and f $ x is that ($) (and also ($!)) is declared to have the lowest binding power of 0 and to associate to the right in expressions. That is exactly what the fixity declaration in the first line provides. Why do we want that?

The answer is that we can now write, for example,

process1 $ process2 $ process3 input

instead of having to write either of

process1 (process2 (process3 input))
(process1 . process2 . process3) input

It is undeniable that ($) can be quite useful on occasions, especially when submitting expressions for evaluation with GHCi, so it's worth mentioning its existence. And the strict application operator ($!) is useful for the reasons discussed above.
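
A two-line GHCi session makes the difference concrete:

ghci> const 0 $ undefined
0

ghci> const 0 $! undefined
*** Exception: Prelude.undefined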

7.3 Controlling time

We have seen that having an ‘eager’ button on our dashboard is a very simple way of controlling the space involved in driving a computation, but what about time? Unfortunately there is no analogous button for speeding up computations; instead we have to understand some of the things that can unintentionally slow down a computation. The Haskell platform comes with documentation on GHC, which contains useful advice on how to make your program run more quickly. The documentation makes three key points:

  • Make use of GHC's profiling tools. There is no substitute for finding out where your program's time and space are really being used up. We will not discuss profiling in this book, but it is important to mention that such tools are available.
  • The best way to improve a program's performance is to use a better algorithm. We mentioned this point at the beginning of the chapter.
  • It is far better to use library functions that have been Seriously Tuned by Someone Else than to craft your own. You might be able to write a better sorting algorithm than the one provided in Data.List, but it will take you longer than just writing import Data.List (sort). This is particularly true when you use GHCi, because GHCi loads compiled versions of the functions in its standard libraries. Compiled functions typically run about an order of magnitude faster than interpreted ones.

Much of the detailed advice in the GHC documentation is beyond the scope of this book, but two tips can be explained here. Firstly, the management of lazy evaluation involves more overheads than eager evaluation, so that if you know that a function’s value will be needed, it is better to push the eager button. As the documentation says: ‘Strict functions are your dear friends’.

The second piece of advice is about types. Firstly, Int arithmetic is faster than Integer arithmetic because Haskell has to perform more work in handling potentially very large numbers. So, use Int rather than Integer whenever it is safe to do so. Secondly, there is less housekeeping work for Haskell if you tailor the type of your function to the instance you want. For example, consider the type of foo1, defined in Section 7.1. There we did not provide a type signature for foo1 (or indeed for any of the other related functions) and that was a mistake. It turns out that

foo1 :: Integral a => Int -> a

If we are really interested in the sum of the first n prime numbers, it is better to declare the type of foo1 to be (say)

foo1 :: Int -> Integer

With this more specialised definition Haskell does not have to carry around a dictionary of the methods and instances of the type class Integral, and that lightens the load.
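
By way of illustration only, here is one plausible body for foo1 at the specialised type (the real definition is back in Section 7.1; the sieve below is merely a stand-in):

foo1 :: Int -> Integer
foo1 n = sum (take n primes)
  where
    primes = sieve [2..]  -- the infinite list of primes
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

The point is the signature: with foo1 :: Int -> Integer the compiler fixes the numeric instance once, instead of passing an Integral dictionary around at run time.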

These pieces of advice can help shave off constant amounts of time and do not affect asymptotic time complexity, the order of magnitude of the timing function. But sometimes we can write code that is inadvertently less efficient asymptotically than we intended. Here is an instructive example. Consider the cartesian product function cp discussed in Chapter 5:

cp [] = [[]]
cp (xs:xss) = [x:ys | x <- xs, ys <- cp xss]

Pretty and clear enough you would think, but compare it with

cp' = foldr op [[]]
  where op xs yss = [x:ys | x <- xs, ys <- yss]

The first version is a direct recursive definition, while the second uses foldr to encapsulate the pattern of the recursion. The two 'algorithms' are the same, aren't they? Well,

ghci> sum $ map sum $ cp [[1..10] | j <- [1..6]]
33000000
(12.11 secs, 815874256 bytes)

ghci> sum $ map sum $ cp' [[1..10] | j <- [1..6]]
33000000
(4.54 secs, 369640332 bytes)

The expression sum $ map sum is there just to force complete evaluation of the cartesian product. Why is the first computation three times slower than the second?

To answer this question, look at the translation that eliminates the list comprehension in the first definition:

cp [] = [[]]
cp (xs:xss) = concat (map f xs)
  where f x = [x:ys | ys <- cp xss]

Now we can see that cp xss is evaluated each time f is applied to an element of xs. That means, in the examples above, that cp is evaluated many more times in the first example than in the second. We cannot be more precise at this point, but will be below when we develop a little calculus for estimating running times. But the issue should be clear enough: the simple recursive definition of cp has led us inadvertently into a situation in which more evaluations are carried out than we intended.

One other way to get a more efficient cartesian product is to just write

cp [] = [[]]
cp (xs:xss) = [x:ys | x <- xs, ys <- yss]
  where yss = cp xss

This definition has exactly the same efficiency as the one in terms of foldr. The lesson here is that innocent-looking list comprehensions can hide the fact that some expressions, though only written once, are evaluated multiple times.

7.4 Analysing time

Given the definition of a function f we will write T(f)(n) to denote an asymptotic estimate of the number of reduction steps required to evaluate f on an argument of 'size' n in the worst case. Moreover, for reasons explained in a moment, we will assume eager, not lazy, evaluation as the reduction strategy involved in defining T.
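
For example, with sum = foldl' (+) 0 we would expect T(sum)(n) = Θ(n), since each element of the list is consumed by exactly one addition step.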
