It Began with Babbage

Subrata Dasgupta

Compilers were usually written in assembly languages, and so writing a compiler was a long, tedious, and potentially error-prone process. Inevitably, compiler writing became an object of inquiry in its own right. The idea of compiler writing systems—more commonly called, catchily but a bit confusingly, compiler-compilers—emerged. Ideally, a compiler-compiler is a programming tool/language that takes as input the syntactic and semantic definition of a programming language and a description of a target machine, and produces as output a compiler for that language–machine combination. In practice, because semantics was hardly ever defined formally (although, as we shall see, inventing meta-languages for defining semantics was yet another subparadigm of this decade), compiler-compilers were largely restricted to the generation of lexical analyzers and parsers from a BNF description of the programming language of concern.
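To make the parser-generation half of this idea concrete, here is a minimal sketch in Python; the three-rule grammar and all the names are invented for illustration and not drawn from any historical compiler-compiler. It shows the sort of recursive-descent recognizer such a tool could emit from a BNF description of simple arithmetic expressions.

```python
# Toy BNF (the kind of description a compiler-compiler would take as input):
#
#   <expr>   ::= <term> { "+" <term> }
#   <term>   ::= <factor> { "*" <factor> }
#   <factor> ::= NUMBER | "(" <expr> ")"
#
# Below, a recursive-descent recognizer of the sort such a tool could generate.

import re

def tokenize(text):
    # Split the input into numbers, operators, and parentheses.
    return re.findall(r"\d+|[+*()]", text)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(tok):
        nonlocal pos
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        pos += 1

    def expr():          # <expr> ::= <term> { "+" <term> }
        term()
        while peek() == "+":
            expect("+")
            term()

    def term():          # <term> ::= <factor> { "*" <factor> }
        factor()
        while peek() == "*":
            expect("*")
            factor()

    def factor():        # <factor> ::= NUMBER | "(" <expr> ")"
        nonlocal pos
        if peek() == "(":
            expect("(")
            expr()
            expect(")")
        elif peek() is not None and peek().isdigit():
            pos += 1
        else:
            raise SyntaxError(f"unexpected token {peek()!r}")

    expr()
    if peek() is not None:
        raise SyntaxError(f"trailing input at {peek()!r}")
    return True

print(parse(tokenize("(1+2)*3")))   # True: the sentence is in the language
```

An actual compiler-compiler would read the BNF itself and produce code of this kind mechanically, typically attaching semantic actions for code generation as well.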

The first working compiler-compiler was developed in 1960 for the Atlas computer by a team at the University of Manchester led by Tony Brooker.[24] The topic of "compiler generation systems" became of sufficient interest that a book on the subject was published in 1970.[25]

It is fair to say that research on both compiling algorithms and compiler writing and implementation represented a distinct subparadigm within the overall paradigm, spawning a plethora of research problems in all aspects of compiling: designing/inventing context-free grammars for programming languages that made efficient parsing possible, inventing algorithms for lexical analysis and parsing, and discovering procedures for code generation and code optimization. Each of these broad problem areas generated a variety of solutions. All of this might be seen as normal science à la Kuhn, but it was far from "puzzle-solving" or "mopping up," as Kuhn would describe normal science.[26]

VII

We noted that the technique of microprogramming invented by Maurice Wilkes in 1951 properly came of age during the early 1960s with the design and implementation of the IBM System/360 series of computers (see Chapter 12, Sections VII and VIII). The IBM 360 also marked the beginning of the discipline and subparadigm of computer architecture.

Recall that John von Neumann's celebrated 1945 report on the EDVAC described what von Neumann called the logical design of the stored program computer (see Chapter 8, Section III). The idea was to frame the conceptual structure and organization of what a computer should be like based on the stored-program principle. Specifics of the physical aspects of such a machine were left largely unstated, although possibilities (and von Neumann's own preferences) were discussed.

This logical design became the essence of the new paradigm. It was assimilated by people like Wilkes, Williams, and Turing in England, and von Neumann and his collaborators in America, into a common mental schema (see Chapter 8, Section V), which was interpreted, elaborated, and refined in different ways by these various researchers as implementations of actual physical computers.

The term computer architecture was not used by von Neumann or any of these other pioneers at the time, or for years afterward, but we are left in no doubt that what would later be called computer architecture was anticipated by von Neumann when he laid emphasis on the logical design of computers.

It was one of several achievements of the designers of the IBM System/360 (in particular, Gene Amdahl [1922–], Frederick Brooks [1931–], and Gerrit Blaauw [1924–]) to first use the term "architecture" in the context of computers. Their original meaning was the collection of functional attributes of a computer, as seen by the programmer—that is, the conceptual structure and functional behavior of the computer as distinct from its internal organization and physical implementation—the outer façade of the computer, as it were.[27]

The IBM System/360 was not a specific physical computer. Rather, System/360 referred to an architecture: a liminal artifact. Actual implementation of this architecture would entail the creation of any one of a range or family of physical computers, the System/360 family. Each resulting machine was a material computational artifact (called a model in IBM parlance, each identified by a model number), with its own cost–performance characteristic, depending on the physical technology and the specific internal organization and design. To put this another way, distinct physical machines, each with its own physical, organizational, and technological characteristics, could serve as hosts implementing the same System/360 architecture. As it happened, the efficacy of this approach was ensured by microprogramming the different hosts so that each appeared as a System/360 "virtual" machine. This technique was called emulation[28]—a term first introduced in this context in 1965[29] (Figure 15.3).

FIGURE 15.3 IBM System/360 Emulation Schema.

To the user (the programmer), these different physical computers were functionally identical. This meant that a program written for the "virtual" System/360 "computer" could be executed on any of the physical machines; or, a program executing on one of the models (say, Model 40) could be transported to another model (say, Model 67). The major difference would lie in the performance of the computers: a higher-numbered model had better performance (speed of processing) than a lower-numbered model. Naturally, the faster machine was more costly than the slower one. This scheme allowed owners of System/360 physical machines to "upgrade" their systems to higher-performing models if they so desired, without any substantial changes to the programs that would be transported from the old machine to the new.
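The separation between one architecture and many implementations can be sketched in modern terms. The following Python fragment is purely illustrative: the class names, the tiny "instruction set," and the cycle times are invented for this sketch, not taken from IBM's specifications. A single abstract interface plays the role of the architecture, and two concrete classes play the role of differently priced, differently performing models that a user program cannot tell apart functionally.

```python
# A sketch of "one architecture, many hosts." The abstract class fixes the
# functional behavior a programmer sees; each concrete model is a different
# implementation with its own (invented) performance figure.

from abc import ABC, abstractmethod


class System360Architecture(ABC):
    """The architecture: what every model must do, not how it does it."""

    @abstractmethod
    def add(self, a: int, b: int) -> int:
        ...

    @abstractmethod
    def cycle_time_ns(self) -> int:
        ...


class Model40(System360Architecture):
    # A slower, cheaper host for the same architecture.
    def add(self, a: int, b: int) -> int:
        return a + b

    def cycle_time_ns(self) -> int:
        return 625  # illustrative figure only


class Model67(System360Architecture):
    # A faster, costlier host; functionally identical to Model40.
    def add(self, a: int, b: int) -> int:
        return a + b

    def cycle_time_ns(self) -> int:
        return 200  # illustrative figure only


def user_program(machine: System360Architecture) -> int:
    # A "program" written against the architecture runs unchanged on any
    # model; only its running time (and the machine's cost) differs.
    return machine.add(2, 3)


for host in (Model40(), Model67()):
    print(type(host).__name__, user_program(host), host.cycle_time_ns(), "ns/cycle")
```

In the actual System/360, of course, the equivalent of user_program was binary machine code, and the models were made to look identical not by a shared interface declaration but by microprograms emulating the common instruction set on each host.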

The System/360 designers thus introduced the concept of computer architecture during the early to mid 1960s. The concept would evolve and become more complex in the decade that followed. A discipline of computer architecture as a branch of computer science would be established, also in the 1970s, and the first of a (still continuing) series of annual international conferences on computer architecture was launched in 1974. The first anthology of papers on the subject was published in 1972,[30] but the seeds of this subparadigm were planted during the 1960s.[31]

VIII

Among the ways in which a science assumes its identity is by way of the publication of definitive books proclaiming what the subject matter is about at some moment in time. Of special status are those books that are seminal—the first textbook in a particular subject, or a text that stamps its authority on the field. They become indispensable to those who study or work in that field.

In 1968 and 1969, Donald Knuth (1938–), professor of computer science at Stanford University, published the first two volumes of a planned series called, collectively, The Art of Computer Programming. The first volume was titled Fundamental Algorithms (1968)[32]; the second, Seminumerical Algorithms (1969).[33] These works became "immediate classics." As sometimes happens, texts come to be referred to by some generally accepted abbreviation, and Knuth's first two volumes of The Art of Computer Programming were simply called "Knuth Volume 1" and "Knuth Volume 2," respectively.[34]

Knuth was only 30 when Fundamental Algorithms was published. Both before and after these two books, he made, and would continue to make, significant contributions to different subparadigms in computer science—in the realms of programming languages, programming style and methodology, algorithms, the history of programming and programming languages, and philosophical aspects of computer science. During the 1980s, he made pioneering contributions to the development of computerized typography.[35] However, it is his Art of Computer Programming for which he has been most recognized.

These books were about algorithms. The concept of an algorithm (if not the term), of course, reaches back to antiquity. Euclid's great work Elements described an algorithm to find the greatest common divisor of two numbers. As Knuth tells us, the word "algorithm" itself originated, as "algorism," in the name of al-Khowârazmi, the writer of a ninth-century Arabic mathematics textbook.[36] The earliest reference the Oxford English Dictionary has for algorithm is from 1695, in an article in the Philosophical Transactions of the Royal Society.[37]

As we have seen, ever since Charles Babbage and Ada Lovelace, people have been inventing algorithms not for humans to perform, but for automatic execution on computing machines. Mathematicians and logicians had concerned themselves with algorithms under the term effective procedure—"a set of rules which tells us how to behave," as another significant text of the late 1960s put it.[38] In fact, Turing, in his celebrated paper of 1936, proposed that any procedure that seems intuitively effective can be carried out by the kind of machine he constructed—Turing machines (see Chapter 4, Section IV). In 1950, the Soviet mathematician Andrei A. Markov (1903–1979) published a book that, in English translation, was titled The Theory of Algorithms; its concern was to inquire whether effective procedures did or did not exist for certain kinds of problems.[39]

Knuth's concern was far more practical. His goal was to present to his readers what was known, circa 1968 to 1969, about algorithms for “real” computers. Toward this end, cautioning that algorithms were more than procedures or methods or recipes, he laid out the fundamental characteristics of algorithms:

1. Finiteness: An algorithm is "finite"; it always terminates (comes to a halt) after a finite number of steps.

2. Definiteness: Every step of an algorithm must be defined precisely and unambiguously.

3. Input and output: An algorithm must have one or more inputs and one or more outputs.

4. Effectiveness: Each of the operations to be performed as part of an algorithm must be basic enough that, in principle, it can be performed exactly by a human being using pencil and paper.[40]
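Euclid's greatest-common-divisor algorithm, mentioned earlier, is the classic illustration of these four properties. The short Python rendering below is offered only as a sketch of that algorithm; the comments note where each of Knuth's criteria shows up.

```python
def gcd(m: int, n: int) -> int:
    """Euclid's algorithm for the greatest common divisor of m and n."""
    # Input and output: two non-negative integers in, one integer out.
    # Definiteness: each step ("replace (m, n) by (n, m mod n)") is exact.
    # Effectiveness: every step is basic integer arithmetic, doable by hand.
    while n != 0:
        m, n = n, m % n
    # Finiteness: the remainder strictly decreases, so the loop must stop.
    return m


print(gcd(119, 544))  # prints 17
```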

In fact, strangely enough, even though people had been inventing algorithms for automatic digital computers since Babbage's time, there was still, in Knuth's time, uncertainty and ambiguity about what an algorithm was and about its relation to programming. Thus, in a letter to the editor of Communications of the ACM in 1966, Knuth made the point that the word algorithm refers to an abstract method of computing, whereas a program expresses an algorithm in some particular programming language. Thus, the same algorithm can be specified as different programs[41]—hence the distinction, made in this story, between algorithms as abstract artifacts and programs as liminal artifacts.
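Knuth's distinction can be seen even within a single language. The sketch below gives a second, recursive program for the same Euclidean algorithm rendered iteratively above: two different programs, one abstract algorithm.

```python
def gcd_recursive(m: int, n: int) -> int:
    # The same abstract algorithm as the iterative gcd above, expressed
    # as a different program: repeated remainders, written recursively.
    return m if n == 0 else gcd_recursive(n, m % n)


assert gcd_recursive(119, 544) == 17
```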

In The Art of Computer Programming, Knuth used several modes of describing algorithms: as sequences of steps in which the steps themselves were in a hybrid of English and mathematical notation, as flowcharts, and as programs in a "toy" assembly language of his own invention. If Wheeler's contributions to the EDSAC project highlighted the interplay of material and liminal computational artifacts (see Chapter 8, Section VII), Knuth's Art of Computer Programming emphasized that the algorithm as an abstract artifact is not an island of its own; it is inevitably conjoined with a corresponding liminal artifact—the program into which the algorithm must be transformed for it to be practically effective.

IX

But Knuth was not concerned only with the design and programming of algorithms. He ushered in a new aspect of their study that he called analysis of algorithms (or algorithmic analysis), which was concerned with the efficiency of algorithms: how well they performed. The idea was to determine the "average behavior" or "average performance" of an algorithm and, if possible, the "optimality" of the algorithm (in some precise sense).[42] This would facilitate comparing two or more algorithms for the same problem and deciding which was the best. With this in mind, Knuth introduced the "big O" notation, invented in 1892 by the German mathematician Paul Bachmann (1837–1920).

Suppose an algorithm has been designed to parse sentences of length n symbols in some language, and suppose it is calculated that the maximum number of steps the parsing algorithm will take is proportional to n². One can then say that the worst-case behavior of the algorithm is O(n²), read as "order n²." If another algorithm for the same problem takes an amount of time proportional to ¼n² + n, this is also an O(n²) algorithm. In other words, because the time required to execute both algorithms will be dominated by the n² term, they are both of the same complexity—that is, both are O(n²) algorithms. However, if a third algorithm is invented that requires time proportional to n log n, this is an O(n log n) algorithm; and because, as n increases, n² "grows" more quickly than n log n, the O(n log n) algorithm is more efficient than the O(n²) algorithm: the time required to execute it increases at a slower rate with the length n of the sentence.
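A quick numerical check, a sketch rather than anything from Knuth's analysis, makes the point about growth rates concrete: even with the favorable constant ¼, the quadratic step count overtakes the n log n count almost immediately, and the gap widens rapidly.

```python
import math

# Compare the two hypothetical step counts from the discussion above:
#   (1/4) * n**2 + n   (an O(n^2) algorithm, despite the small constant)
#   n * log2(n)        (an O(n log n) algorithm)
for n in (10, 100, 1_000, 10_000):
    quadratic = 0.25 * n**2 + n
    linearithmic = n * math.log2(n)
    print(f"n = {n:>6}:  n^2-style {quadratic:>12,.0f}   n log n-style {linearithmic:>10,.0f}")
```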
