6
Intermezzo

I

By the end of World War II, independent of one another (and sometimes in mutual ignorance), a small assortment of highly creative minds—mathematicians, engineers, physicists, astronomers, and even an actuary, some working in solitary mode, some in twos or threes, others in small teams, some backed by corporations, others by governments, many driven by the imperative of war—had developed a shadowy shape of what the elusive Holy Grail of automatic computing might look like. They may not have been able to define a priori the nature of this entity, but they were beginning to grasp how they might recognize it when they saw it. Which brings us to the nature of a computational paradigm.

II

Ever since the historian and philosopher of science Thomas Kuhn (1922–1996) published The Structure of Scientific Revolutions (1962), we have all become ultraconscious of the concept and significance of the paradigm, not just in the scientific context (with which Kuhn was concerned), but in all intellectual and cultural discourse.1

A paradigm is a complex network of theories, models, procedures and practices, exemplars, and philosophical assumptions and values that establishes a framework within which scientists in a given field identify and solve problems. A paradigm, in effect, defines a community of scientists; it determines their shared working culture as scientists in a branch of science and a shared mentality. A hallmark of a mature science, according to Kuhn, is the emergence of a dominant paradigm to which a majority of scientists in that field adhere, and with which they broadly, although not necessarily in detail, agree. In particular, they agree on the fundamental philosophical assumptions and values that govern the science in question; its methods of experimental and analytical inquiry; and its major theories, laws, and principles. A scientist “grows up” inside a paradigm, beginning with his earliest formal training in a science in high school, through undergraduate and graduate schools, through doctoral work into postdoctoral days. Scientists nurtured within and by a paradigm more or less speak the same language, understand the same terms, and read the same texts (which codify the paradigm).

However, rather like a nation's constitution, a paradigm is never complete or entirely unambiguous. There are gaps of ignorance within it that need to be filled—clarifications and interpretations to be made, unknowns to be known, and open problems to be solved. These are the bread-and-butter activities of most practitioners of that science. Kuhn called the sum of these activities normal science. In doing normal science, the paradigm as a whole is never called into question; rather, its details are articulated.

We will see, as our story unfolds, that there is much more to Kuhn's theory of paradigms and how it can explain scientific change. We also note that Kuhn's theory has been explored widely and criticized severely.2 But here, rather as he had postulated paradigms as frameworks for doing science, we can use his theory of paradigms as a framework for interpreting history, to lend some shape to this unfolding history of computer science.

Let us consider, for our immediate purpose, one of his key historical insights. This is the situation in which a paradigm has yet to emerge within a discipline. The absence of a paradigm—the preparadigmatic stage—marks a science that is still immature, and perhaps even casts doubt on whether it is a science at all. In this condition, there might exist several “competing schools and subschools of thought.”3 They vie with one another, each school having its own fierce adherents. They may agree on certain aspects of their burgeoning discipline, but they disagree on other vital aspects. In fact, according to Kuhn, leaving aside such fields as mathematics and astronomy, in which the first paradigms reach back to antiquity, this situation is fairly typical in the sciences.4

And, in the absence of a shared framework, in the absence of a paradigm, anything goes. Every fact or observation gleaned by the practitioners of an immature science seems relevant, perhaps even equally significant.

III

This was the situation in computing circa 1945. No one had yet ventured to speak of a science of computing, let alone something as precise as a disciplinary name such as computer science. As we have seen, even the word computer was not yet widely in place to signify the machine rather than the person. For a science of computing to be spoken of, there had to be some semblance of a paradigm to which the few dozen practitioners then in the field could pay allegiance. There was no solid evidence of a paradigm—yet. On the other hand, certain elements had emerged as common ground—some, in fact, reaching back to Babbage himself.

First, the central focus of all the protagonists in this story so far, beginning with Babbage, was a machine to perform automatic computation: a computational artifact (see Prologue). This artifact was basically a material one, and so the physical technology was always at the forefront of the minds of the people involved. Yet (again, beginning with Babbage and his sometime collaborator Lovelace), the material artifact was not an island of its own. Unlike almost all material artifacts that had ever been invented and built before, this one entailed a distinct intellectual activity: preparing a problem to be solved by an automatic computing machine. As yet, there was no agreed-on name for this activity or its product. The term program was still some way off.

Second, a fundamental organization of an automatic computing machine—its internal architecture—had been clarified. There must be a means of providing the machine with information, and a means by which the results of computation could be communicated to the user—input and output devices. There must be a store or memory to hold the information to be used in a computation or the results of a computation. There must be an arithmetic unit that can actually carry out the computations. Even the possibility of parallel processing—using two or more arithmetic units, even multiple input and output devices—was “in the air.” There was also the possibility of specialized units for specific kinds of mathematical operations, such as multiplication and the extraction of square roots, or for operations to “look up” mathematical tables. And there must be a means for controlling the execution of a computational task, and a means for specifying what the computational task is to be.
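For a present-day reader, this organization is easy to make concrete. The following sketch is a deliberately anachronistic illustration in Python, not a description of any machine of the period; the class, the instruction format, and every name in it are invented for this example.

```python
# A deliberately anachronistic sketch of the organization described above:
# input, output, a store (memory), an arithmetic unit, and a control that
# steps through a prepared computational plan. All names are invented.

class Machine:
    def __init__(self, memory_size=16):
        self.memory = [0.0] * memory_size   # the "store"

    def arithmetic(self, op, a, b):
        # The arithmetic unit: carries out the actual computation.
        return {"add": a + b, "sub": a - b, "mul": a * b, "div": a / b}[op]

    def run(self, plan, inputs):
        # Input: load initial values into the store.
        for address, value in inputs.items():
            self.memory[address] = value
        # Control: execute the plan's instructions in sequence.
        for op, src1, src2, dest in plan:
            self.memory[dest] = self.arithmetic(
                op, self.memory[src1], self.memory[src2])
        # Output: report the contents of the store.
        return self.memory

# A two-instruction "plan" computing (3 + 4) * 2.
machine = Machine()
result = machine.run(
    plan=[("add", 0, 1, 2), ("mul", 2, 3, 4)],
    inputs={0: 3.0, 1: 4.0, 3: 2.0},
)
print(result[4])  # 14.0
```

The essential point is the separation of concerns the pioneers had converged on: a store, an arithmetic unit, input and output, and a control that steps through a plan prepared in advance.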

Third, the distinction between special-purpose and general-purpose computers was rather vague. The machines that had been conceived or actually built and used thus far were designed to perform specific kinds of computational tasks (some very specific, some spanning a range of problems within a problem class). The dominant class of problems for which computing machines had been developed, up to this point, was mathematical or, at least, numeric. The Colossus, in contrast, was specialized toward the class of logical (or, equivalently, Boolean) problems. A general-purpose machine must provide capabilities to process tasks spanning different classes of problems. This means that the physical machine itself must provide the means for the efficient execution of these different tasks. Such a capability was still lacking.

Fourth, as noted earlier, the words programmable and computer program had yet to emerge. The terms still in common use circa 1945 were “paper tape controlled” or “plugboard controlled.” Zuse, as we have seen, used the term “computational plan,” which is perhaps closest to program. Aiken and Hopper spoke of a “sequence tape.” But the idea of programmability, reaching back to Babbage and Lovelace, was, circa 1945, a shared concept.

Fifth, and last, certain other terms had emerged to form the nucleus of a computing vocabulary: “floating-point representation,” “binary,” and “binary-coded decimal” in the context of numbers. Another was “register,” signifying an individual unit of information storage, linked either directly with an arithmetic unit or grouped into collections serving as the machine's memory.
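For the modern reader, the differences among these representations are easy to display. The sketch below, again purely illustrative, encodes a number in each of the three forms just named; the function names are invented for the example.

```python
# Illustrative only: three number representations that were entering the
# computing vocabulary circa 1945. The function names are invented here.
import math

def to_binary(n: int) -> str:
    # Pure binary: the whole number as a single string of bits.
    return bin(n)[2:]

def to_bcd(n: int) -> str:
    # Binary-coded decimal: each decimal digit encoded separately in 4 bits.
    return " ".join(format(int(digit), "04b") for digit in str(n))

def to_floating_point(x: float):
    # Floating point: the number split into a mantissa and an exponent,
    # so that x = mantissa * 2**exponent.
    return math.frexp(x)

print(to_binary(1945))          # 11110011001
print(to_bcd(1945))             # 0001 1001 0100 0101
print(to_floating_point(19.5))  # (0.609375, 5)
```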

This much seemed to be agreed on. However, there were different opinions and views on other fundamental matters. How should numbers be represented? Some had come to appreciate the advantages of binary notation, whereas others clung to the familiar decimal system. How large should the unit of information storage (in present-centered language, the word size) be? What should be the form of the computational plan?

Then there was the matter of the physical technology of computers. Purely mechanical technology—gears, levers, cams, sprocket and chain, the stuff of kinematics, the domain of mechanical engineering—still prevailed, but it was giving way to the guile of electrical technology. Electrical relays and electromagnets had become the preferred and trusted physical basis for building computing machines. There was even an elegant mathematics—Boolean algebra—that could be applied to the design of the binary switching circuits built out of such electrical components.
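To illustrate that last point: the behavior of a simple switching circuit, such as a half adder that adds two binary digits, can be written directly as Boolean expressions. The sketch below uses Python merely as notation for those expressions; it is not a historical design.

```python
# Boolean algebra applied to a binary switching circuit: a half adder,
# which adds two one-bit inputs. Illustrative only.

def half_adder(a: bool, b: bool):
    total = a ^ b     # sum bit: exclusive-or of the inputs
    carry = a and b   # carry bit: AND of the inputs
    return total, carry

# The truth table for all four input combinations.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"a={a:d} b={b:d} -> sum={s:d} carry={c:d}")
```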

However, as World War II raged on and the imperative of faster means of computation became more urgent, the lure of electronic circuit elements became increasingly attractive. On August 14, 1945, five days after America exploded its second atomic bomb over Nagasaki, the Japanese surrendered. The Germans had already surrendered in May. World War II was finally over. The state of computing was scarcely on the minds of anyone in the world, save for the few dozen people who had been involved in its development before and during the war years—in America, Britain, and Germany. But for these few, the state of computing and computing machines mattered. Viewed within a Kuhnian framework, however, the situation was still very much a preparadigmatic one.

IV

This preparadigmatic state extended to the larger theoretical questions: What kind of discipline was computing? Was it a discipline at all?

We noted at the beginning of this book that scientists, as a community, agree implicitly and broadly that what makes a discipline scientific is, above all, its methodology (see Prologue, Section V)—the use of observation, experimentation, and reasoning; a critical stance; and an ever-preparedness to treat explanations (in the form of hypotheses, theories, laws, or models) as tentative and to discard or revise them if the evidence demands this.

In the artificial sciences, explanations are about artifacts, not nature. Here, scientists address the question of whether such and such an artifact satisfies its intended purpose (see Prologue, Section III). We also noted that, in the case of artifacts of any reasonable complexity, design and implementation are activities that lie at the heart of the relevant artificial sciences, activities missing in the natural sciences. Designs serve as theories about a particular artifact (or class of artifacts), and implementations serve as experiments to test the validity of those theories (see Prologue, Section VI).

We have observed thus far in this story the emergence of several of these features. From as far back as Charles Babbage, we see, for example, the separation of design from implementation. In fact, in Babbage's case, it was all design, never implementation. It was left to others to implement the Difference Engine and to test Babbage's theory. The Analytical Engine was detailed in theory, but the theory was never tested.

With the advent of electromechanical machines, we observe the strongly empirical/experimental flavor of computing research. The families of machines, whether at Bell Laboratories, IBM, Bletchley Park, or Zuse's workplace in Germany, reveal the emphasis on building individual, specific machines; ascertaining their appropriateness for their intended purposes; revising or modifying the design in the light of their performance; or even creating a new design because of changes in purposes and new environmental factors (such as the availability of new technologies).
