Thinking Functionally with Haskell

for proving equations.

Further considerations

It is a basic constraint of our calculator that laws are applied in one direction only, namely from left to right. This is primarily to prevent calculations from looping. If laws could be applied in both directions, then the calculator could oscillate by applying a law in one direction and then immediately applying it in the reverse direction.

Even with a left-to-right rule, some laws can lead to infinite calculations. Typically, these laws are the definitions of recursive functions. For example, consider the definition of iterate:

defn iterate: iterate f = cons . fork id (iterate f . f)

This is the definition of iterate expressed in point-free form. The functions cons and fork are defined by

cons (x,xs) = x:xs
fork f g x = (f x,g x)

We have met fork before in the exercises in Chapters 4 and 6, except that we wrote fork (f,g) instead of fork f g. In what follows, all our functions will be curried. The appearance of the term iterate f on both sides of the law means that any calculation that can apply the definition of iterate once can, potentially, apply it infinitely often. But not necessarily. Here is a calculation (produced by the calculator) that avoids infinite regress:
head . iterate f
=   {defn iterate}
head . cons . fork id (iterate f . f)
=   {head after cons}
fst . fork id (iterate f . f)
=   {fst after fork}
id

The calculation makes use of the two laws:

head after cons: head . cons = fst
fst after fork: fst . fork f g = f

The reason non-termination is avoided is that these two laws are given preference over definitions in calculations, a wrinkle that we will elaborate on below.
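Although the calculator manipulates these laws purely symbolically, both can be checked pointwise in Haskell. Here is a small sanity check (the sample arguments are arbitrary choices, not part of the text):

```haskell
-- Pointwise check of the two laws used in the calculation above.
-- Definitions of cons and fork follow the text.
cons :: (a, [a]) -> [a]
cons (x, xs) = x : xs

fork :: (a -> b) -> (a -> c) -> a -> (b, c)
fork f g x = (f x, g x)

-- head after cons: head . cons = fst
headAfterCons :: Bool
headAfterCons = (head . cons) (1 :: Int, [2, 3]) == fst (1 :: Int, [2, 3 :: Int])

-- fst after fork: fst . fork f g = f
fstAfterFork :: Bool
fstAfterFork = (fst . fork (+ 1) show) (41 :: Int) == (+ 1) (41 :: Int)

main :: IO ()
main = print (headAfterCons && fstAfterFork)
```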

In order to appreciate just what the calculator can and cannot do, here is another example of rendering a recursive definition into point-free form. Consider the definition of concatenation:

[] ++ ys = ys
(x:xs) ++ ys = x:(xs ++ ys)

We will use cat to stand for (++). We will also need nil, cons and the function cross (f,g), which we will now write as f * g. Thus,

(f * g) (x,y) = (f x, g y)

Finally we will need a combinator assocr (short for ‘associate-right’), defined by

assocr ((x,y),z) = (x,(y,z))

Here are the translations of the two defining equations of cat in point-free form:

cat . (nil * id) = snd
cat . (cons * id) = cons . (id * cat) . assocr

We cannot prove that cat is associative with our calculator, for that would involve a proof by induction, but we can state it as a law:

cat associative: cat . (cat * id) = cat . (id * cat) . assocr
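The two point-free equations can themselves be checked on sample data. The following sketch transcribes the definitions from the text, writing cross as a prefix function rather than the infix *:

```haskell
-- Checking the two point-free equations for cat on sample inputs.
cat :: ([a], [a]) -> [a]
cat (xs, ys) = xs ++ ys

nil :: a -> [b]        -- the constant function returning the empty list
nil _ = []

cons :: (a, [a]) -> [a]
cons (x, xs) = x : xs

cross :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
cross f g (x, y) = (f x, g y)

assocr :: ((a, b), c) -> (a, (b, c))
assocr ((x, y), z) = (x, (y, z))

-- cat . (nil * id) = snd
eq1 :: Bool
eq1 = (cat . cross nil id) ((), "ys") == snd ((), "ys")

-- cat . (cons * id) = cons . (id * cat) . assocr
eq2 :: Bool
eq2 = (cat . cross cons id) (('x', "xs"), "ys")
   == (cons . cross id cat . assocr) (('x', "xs"), "ys")

main :: IO ()
main = print (eq1 && eq2)
```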

Continuing with this example for a bit longer, here are the two bifunctor laws of (*):

bifunctor *: id * id = id
bifunctor *: (f * g) . (h * k) = (f . h) * (g . k)

And here is a law about assocr:

assocr law: assocr . ((f * g) * h) = (f * (g * h)) . assocr
Now for the point of the example: our calculator cannot perform the following valid calculation:

cat . ((cat . (f * g)) * h)
=   {identity law, in backwards direction}
cat . ((cat . (f * g)) * (id . h))
=   {bifunctor *, in backwards direction}
cat . (cat * id) . ((f * g) * h)
=   {cat associative}
cat . (id * cat) . assocr . ((f * g) * h)
=   {assocr law}
cat . (id * cat) . (f * (g * h)) . assocr
=   {bifunctor *}
cat . ((id . f) * (cat . (g * h))) . assocr
=   {identity law}
cat . (f * (cat . (g * h))) . assocr

The problem here is that we have to apply the identity and bifunctor laws in both directions, and the calculator is simply not up to the task. Observe that the essence of the proof is the simplification of the expression

cat . (id * cat) . assocr . ((f * g) * h)

in two different ways, one by using the associativity of cat, written in the form

cat associative: cat . (id * cat) . assocr = cat . (cat * id)

and one by using the assocr law. Even if we generalised calculate to return a tree of possible calculations, it would not be obvious what expression we would have to start out with in order to achieve the calculation above, so we abandon any attempt to get the calculator to produce it.
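Though the calculator cannot produce the calculation, its two end points are easily compared pointwise. The following sketch instantiates f, g and h with arbitrary list-valued functions and checks that both sides agree:

```haskell
-- The first and last expressions of the calculation, applied to a sample
-- argument. Equality follows from the associativity of (++).
cat :: ([a], [a]) -> [a]
cat (xs, ys) = xs ++ ys

cross :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
cross f g (x, y) = (f x, g y)

assocr :: ((a, b), c) -> (a, (b, c))
assocr ((x, y), z) = (x, (y, z))

-- Arbitrary sample functions standing in for f, g and h
f, g, h :: Int -> [Int]
f x = [x]
g x = [x, x]
h x = [x, x, x]

lhs, rhs :: ((Int, Int), Int) -> [Int]
lhs = cat . cross (cat . cross f g) h
rhs = cat . cross f (cat . cross g h) . assocr

main :: IO ()
main = print (lhs ((1, 2), 3) == rhs ((1, 2), 3))
```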

It is not just the functor laws that sometimes have to be applied in both directions. For an example, see Section 12.8. Sometimes we can get around the problem by stating a law in a more general form than necessary, sometimes by using a hack, and sometimes not at all. As we said at the outset, our calculator is a limited one.

In the scheme of automatic calculation that we are envisaging there are only two degrees of freedom: the choice of which law to apply, and the choice of which subexpression to be changed. The first degree of freedom can be embodied in the order in which laws are presented to the calculator: if two different laws are applicable, then the one earlier in the list is chosen.

Certainly some laws should be tried before others; these are laws that reduce the complexity of intermediate expressions. Good examples are the laws f . id = f and id . f = f. The naive definition of complexity is that there are fewer compositions on the right than on the left. It is unlikely to be a mistake to apply these laws as soon as the opportunity arises. Indeed the fact that id is the identity element of composition can and will be built into the calculator, so the two identity laws will be taken care of automatically. Similarly, early application of laws like nil . f = nil and map f . nil = nil (and indeed the two laws used in the calculation about iterate), all of which reduce the number of compositions, help to reduce the sizes of intermediate expressions. For the sake of a word, let us call these the simple laws.

On the other hand, some laws should be applied only as a last resort. Typically, these laws are definitions, such as the definition of filter or iterate. For example, in the expression

map f . concat . map (filter p)

we really don’t want to apply the definition of filter too early; rather we would prefer to apply the map after concat law first, and only apply the definition of filter later on if and when it becomes necessary. Apart from anything else, intermediate expressions will be shorter.

In summary it looks sensible to sort our laws into the simple laws, followed by the nonsimple laws that are not definitions, followed by the definitions. The second degree of freedom is represented by the order in which the subexpressions of a given expression are presented as candidates for instances of laws: if laws are applicable to two different subexpressions, then the subexpression coming earlier in the enumeration is chosen.
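The sorting of laws might be sketched as follows. The Law record and the dot-counting complexity measure here are our own invention for illustration; the calculator's actual representation of laws is given later in the chapter:

```haskell
import Data.List (partition)

-- A hypothetical law representation: a name, a flag marking definitions,
-- and the number of compositions on each side as a crude complexity measure.
data Law = Law { lawName :: String, isDefn :: Bool, lhsDots, rhsDots :: Int }

-- A law is 'simple' if it is not a definition and strictly reduces
-- the number of compositions.
isSimple :: Law -> Bool
isSimple l = not (isDefn l) && rhsDots l < lhsDots l

-- Simple laws first, then the other non-definitions, then the definitions.
sortLaws :: [Law] -> [Law]
sortLaws laws = simple ++ others ++ defns
  where
    (defns, rest)    = partition isDefn laws
    (simple, others) = partition isSimple rest
```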

That still leaves open the decision whether to give preference to laws or to subexpressions in calculations. Do we start with a subexpression and try every law in turn, or start with a law and see if it applies anywhere? Does it really matter which of these alternatives is chosen? While it is true that, having applied some law at some subexpression, the next law to be applied is likely to be at a ‘nearby’ expression, it is not clear how to formalise this notion of nearness, nor is it clear whether it would contribute significantly to the efficiency of calculations, either in the computation time or in the length of the result.

12.2 Expressions

At the heart of the calculator is the data type Expr of expressions. Most of the components of the calculator are concerned with analysing and manipulating expressions in one way or the other. Expressions are built from (function) variables and constants, using functional composition as the basic combining form. Variables take no arguments, but constants can take any number of arguments, which are themselves expressions. We will suppose all functions are curried and there are no tuples; for example we write pair f g instead of pair (f,g). There is no particular reason for avoiding tuples, it is just that most functions we have discussed in the book are curried and we don’t really need both.

To compensate, we will also allow ourselves binary infix operators, writing, for example, f * g instead of cross f g. Except for functional composition we will not assume any order of precedence or association between binary operators, insisting that expressions involving such operators be fully parenthesised. That still leaves open the question of the precedence of composition. Does f * g . h mean (f * g) . h or f * (g . h)? Haskell puts composition at a high level of precedence and we will adopt the same convention. Thus f * g . h will be parsed as f * (g . h). But we will always write such expressions using parentheses to avoid ambiguity.
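The same convention can be observed in Haskell itself. In the sketch below, a hypothetical product operator (.*.) stands in for the calculator's *, declared at precedence 7, below composition's level 9, so composition binds tighter:

```haskell
-- Composition (infixr 9) binds tighter than an operator at level 7,
-- so (+1) .*. negate . fst parses as (+1) .*. (negate . fst).
infixl 7 .*.
(.*.) :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
(f .*. g) (x, y) = (f x, g y)

parsed :: (Int, Int)
parsed = ((+ 1) .*. negate . fst) (1, (2, 3))

main :: IO ()
main = print (parsed == (2, -2))
```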

Here is the proposed BNF grammar for expressions:

expr   ::= simple {op simple}?
simple ::= term {'.' term}*
term   ::= var | con {arg}* | '(' expr ')'
arg    ::= var | con | '(' expr ')'
var    ::= letter {digit}?
con    ::= letter letter {letter | digit}*
op     ::= {symbol}+

Variable names consist of single letters only, possibly followed by a single digit. Thus f and f1 are legitimate variable names. Constant names are sequences of at least two alphanumeric characters beginning with two letters, such as map or lhs2tex, while operator names are nonempty sequences of non-alphanumeric symbols, such as * and <+>. The first line says that an expression is a simple expression, possibly followed by an operator and another simple expression. Simple expressions are compositions of terms. The remaining lines are, we trust, self-explanatory.
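These lexical rules can be transcribed directly as predicates on complete names. The predicates below are our own sketch; the book's parser recognises the categories character by character with combinators instead:

```haskell
import Data.Char (isAlpha, isAlphaNum, isDigit)

-- Hypothetical classifiers for the three kinds of name in the grammar.
isVarName, isConName, isOpName :: String -> Bool

-- A single letter, possibly followed by a single digit
isVarName [c]    = isAlpha c
isVarName [c, d] = isAlpha c && isDigit d
isVarName _      = False

-- Two letters, then any alphanumerics
isConName (c1 : c2 : rest) = isAlpha c1 && isAlpha c2 && all isAlphaNum rest
isConName _                = False

-- A nonempty string of operator symbols
isOpName s = not (null s) && all (`elem` "!@#$%&*+./<=>?\\^|:-~") s
```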

Here is the definition of Expr we will use:

newtype Expr = Compose [Atom] deriving Eq
data Atom    = Var VarName | Con ConName [Expr]
               deriving Eq

type VarName = String
type ConName = String

Expressions and atoms are declared to be members of the class Eq because we will need to test expressions for equality. Later on we will install expressions as an instance of Show for printing them at the terminal.

Here are some examples of expressions and their representations:

f . g . h   => Compose [Var "f",Var "g",Var "h"]
id          => Compose []
fst         => Compose [Con "fst" []]
fst . f     => Compose [Con "fst" [],Var "f"]
(f * g) . h => Compose [Con "*" [Compose [Var "f"],
                                 Compose [Var "g"]],Var "h"]
f * g . h   => Compose [Con "*" [Compose [Var "f"],
                                 Compose [Var "g",Var "h"]]]

The fact that composition is an associative operation is built into the design of Expr. The particular constant id is reserved and will always be interpreted as the identity element of composition.
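To see why associativity comes for free, note that composing two expressions just concatenates their atom lists, so the nesting of compositions disappears in the representation. The smart constructor compose below is our own illustration, not part of the book's code:

```haskell
-- Composition as list concatenation: both nestings of . give the same list.
newtype Expr = Compose [Atom] deriving (Eq, Show)
data Atom    = Var String | Con String [Expr] deriving (Eq, Show)

compose :: Expr -> Expr -> Expr
compose (Compose as) (Compose bs) = Compose (as ++ bs)

-- The reserved constant id is the empty composition
identity :: Expr
identity = Compose []

f, g, h :: Expr
f = Compose [Var "f"]
g = Compose [Var "g"]
h = Compose [Var "h"]

main :: IO ()
main = print ( compose f (compose g h) == compose (compose f g) h
            && compose identity f == f )
```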

The parsing combinators described in the previous chapter enable us to parse expressions. Following the BNF, we start with

expr :: Parser Expr
expr = simple >>= rest
  where
  rest s1 = do {op <- operator;
                s2 <- simple;
                return (Compose [Con op [s1,s2]])}
            <|> return s1

An operator is a sequence of one or more operator symbols, as long as it is neither the composition operator nor an equals sign:

operator :: Parser String
operator = do {op <- token (some (sat symbolic));
               Parsing.guard (op /= "." && op /= "=");
               return op}

symbolic  = (`elem` opsymbols)
opsymbols = "!@#$%&*+./<=>?\\^|:-~"

The function Parsing.guard is an example of a qualified name. Haskell's standard libraries also provide a function guard (in Control.Monad), but we want the function of the same name from a module Parsing that includes all our parsing functions. A qualified name consists of a module name followed by a period followed by the name of the qualified value.
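The same mechanism works with any module; since the book's Parsing module is not reproduced here, the following minimal sketch illustrates qualified names with a standard library module instead:

```haskell
-- A qualified import: Char.toUpper unambiguously names the toUpper
-- exported by Data.Char, even if another toUpper were in scope.
import qualified Data.Char as Char

shout :: String -> String
shout = map Char.toUpper

main :: IO ()
main = putStrLn (shout "guard")
```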
