It Began with Babbage
Subrata Dasgupta

The term computational complexity was also used to signify these kinds of performance characteristics of algorithms.43 Studies of the computational complexity of abstract computing machines (such as Turing machines) or of algorithms that could recognize certain types of formal languages (such as context-free languages) were first published in 1965.44 These studies were largely concerned with the performance limits of abstract machines on certain kinds of computations. It seems fair to say that, with the publication of Knuth's Fundamental Algorithms, computational complexity entered the domain of algorithms designed for “real” machines. Anyone who designed or invented an algorithm would be obligated to perform an analysis of the algorithm and estimate its complexity using the big O notation.
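To give a concrete flavor of the kind of analysis Knuth made routine, here is a minimal sketch (my illustration, not Knuth's) that counts the comparisons performed by a linear search and by a binary search over a sorted list; the first grows in proportion to n, that is, O(n), and the second in proportion to log n, that is, O(log n).

    # An illustration of algorithm analysis using the big O notation.
    # linear_search examines elements one by one: at most n comparisons, O(n).
    # binary_search halves the interval each step: roughly log2(n) comparisons, O(log n).

    def linear_search(items, target):
        comparisons = 0
        for value in items:
            comparisons += 1
            if value == target:
                break
        return comparisons

    def binary_search(items, target):
        comparisons = 0
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if items[mid] == target:
                break
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons

    data = list(range(1_000_000))          # a sorted list of one million keys
    print(linear_search(data, 999_999))    # 1000000 comparisons: proportional to n
    print(binary_search(data, 999_999))    # about 20 comparisons: proportional to log2(n)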

Knuth's achievements in writing The Art of Computer Programming were many. Perhaps the most fundamental was his demonstration of the relationship between mathematics and practical programming via algorithm design and analysis. Mathematical fields—abstract algebra, probability theory, statistics, and the theory of numbers—became, in Knuth's books, essential mathematical tools for the computer scientist interested in the design and analysis of algorithms. We are reminded of how, almost 300 years before, Sir Isaac Newton (1642–1727) published his great work in physics (or natural philosophy, as it was then called), Philosophiae Naturalis Principia Mathematica (1687; known simply by posterity as Principia), in which he demonstrated how mathematical analysis becomes the tool for explaining physical phenomena. Knuth's The Art of Computer Programming stands in the same relation to the design of algorithms (and programs) as Newton's Principia did to physics.

X

According to the online Oxford English Dictionary, the term software entered the culture of computing in an article in the June 1960 issue of Communications of the ACM. The article referred to “such software as COBOL.” A year later an article in the British Computer Society's Computer Bulletin, in June 1961, spoke of “The programming expertise or ‘software’ that is at the disposal of the computer user comprising expert advice on all matters of machine code programming.…”45

So although the first article alluded to a language (and, implicitly, its compiler) as software, the second referred to “expertise” that was available to the user. By the mid 1960s a more precise meaning of the term was in place. The British newspaper The Observer of December 13, 1964, referred to software as “the ‘supervisory programme,’ the complex instructions which enable the machine to handle many tasks simultaneously.”46 The New Scientist of August 25, 1966, spoke of “‘software’—the programmes for operating the computer on a wide range of problems.”47

The Oxford English Dictionary gives the modern meaning of software as

1. The programs and procedures required to enable a computer to perform a specific task, as opposed to the physical component of the system.

2. esp. The body of system programs, including compilers and library routines, of a particular computer and often provided by the manufacturer, as opposed to program material provided by a user for a special task.48

The significance of the word itself should not be overlooked: software as opposed to hardware. The physical computer was “hard”—materialistic, subject ultimately to physical laws. A program was “soft”—comprising symbols, indifferent to physical laws, changeable, plastic, with laws all their own. Moreover, this new dichotomy of hardware/software suggested, by the mid 1960s, a new order in computer culture. The physical computer was no longer the epicenter of computing with all else as peripheral. Rather, there were two classes of artifacts, the physical and the symbolic—the material and the liminal—and they were equal partners. Computing entailed a bilateral symbiotic relationship; the totality constituted a computer system.

Moreover, there was the increasingly transparent issue of the largeness, the complexity of system software—the “body of system programs” mentioned in the dictionary definition. Programming languages and their compilers constituted one major type of system software. And more or less at the same time as that class of software came into existence during the mid 1950s, another class made its appearance. And in just more than a decade, the size and complexity of this class of software far exceeded those of any other kind of software.

XI

Ever since David Wheeler and the EDSAC group in Cambridge conceived and implemented subroutine libraries and the first assembler, computer systems designers and researchers had sought to protect, as far as possible, the “user” from the gritty realities of the physical computer—thus, high-level programming languages and their compilers. The user was offered not so much an IBM, a UNIVAC, or an English Electric computer, but a FORTRAN, an Algol, a COBOL, or a LISP machine.

The system programmer's ambition was to create a grand illusion. The tedium of programming the input and output processes of a computation; the intricacies of organizing a program to fit into available physical memory; managing the movement of program segments from “secondary memories” such as tapes, disks, drums, or punched cards (these being the common hardware for storing large files of program data by the end of the 1950s) into the computer's main memory; the user's frustration at having to share access to a physical computer over time—the system programmer's goal was to free the user from all such onerous responsibilities by creating the illusion that none of this mattered or was necessary; or, rather, it would be the system that would take these responsibilities on itself.

However, the system programmer was not driven by altruism alone. A computer was a machine, an expensive resource; a computer center was an information processing factory. Ever since the Industrial Revolution during the 18th century, those who owned, managed, and ran factories were obsessed with maximizing the efficiency of their machines, achieving the greatest possible output with the least possible cost. And this meant, among other things, maximizing the use of machines. Machines that lay idle, machines that ran below their capacity struck a chill in the bones of the factory owner.

This factory ethic would inevitably infiltrate computer culture. Users' programs that solved systems of differential equations or calculated and produced paychecks came to be called “jobs” that a computer center had to do. And in the best factory tradition, rather than process these jobs piecemeal, one by one, as they “arrived” in the computer center, jobs were collected into batches and processed in one continuous sequence until the entire batch of jobs had been processed: “batch processing.” The “overhead” in preparing the computer installation to process jobs could thus be reduced.
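The arithmetic behind that “overhead” argument can be caricatured in a few lines of Python. The sketch below is purely illustrative, with made-up numbers: it assumes a fixed setup cost paid once per submission to the machine, so jobs processed piecemeal pay it over and over while a batch pays it once.

    # A toy model of batch processing (hypothetical numbers).
    # Setup (mounting tapes, loading the compiler, and so on) is paid once per
    # submission to the machine, not once per job in a batch.

    SETUP_MINUTES = 10     # assumed cost of preparing the installation
    JOB_MINUTES = 2        # assumed average running time of one job

    def piecemeal(jobs):
        # each job arrives alone and pays the setup cost itself
        return sum(SETUP_MINUTES + JOB_MINUTES for _ in range(jobs))

    def batched(jobs):
        # the whole batch pays the setup cost once, then runs continuously
        return SETUP_MINUTES + JOB_MINUTES * jobs

    print(piecemeal(30))   # 360 minutes for 30 jobs processed one by one
    print(batched(30))     # 70 minutes for the same 30 jobs in one batch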

A computer system as a factory machine posed other problems. The two main types of activities were input/output and actual computation. The latter was done by the “central processing unit” and, by the beginning of the 1960s, the former was performed by input/output devices under command of the program. Computation proceeded at electronic switching speeds; input/output, involving moving devices—card readers and punches, printers, magnetic tape units, disks, and drums—required far more time. And so while a program performed its input/output transfers (such as large files of data) between the computer and input/output devices, the central processing unit could do nothing but, metaphorically, twiddle its thumbs. Thus, specialized “input/output processors” (called “channels” in IBM terminology49) were developed that could work independently of the central processor. This led to the possibility that while one program was executing its input/output operations via the specialized input/output processors, the central processor might be set to perform actual computations of another program. This would reduce the central processor's “idle time” and increase its productivity. But this meant that, at any one time, the main memory would have to be occupied by more than one program, and the central processor would be shared between the programs. Program P1 has control of the central processor; when it requires to perform input/output, it issues input/output commands to the specialized processor and releases the central processor, which is then possessed by another program P2 occupying main memory. When the input/output is complete, P1 may “interrupt” the execution of P2, and control of the central processor is given back to P1; or P2 may continue to execute until it requires input/output, at which time it issues the appropriate commands to the input/output processors and then relinquishes control of the central processor to P1. At any given time, several programs sharing main memory may take turns taking control of the central processor. This was the concept of multiprogramming.
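The alternation just described can be sketched as a toy simulation. This is an illustration under deliberately simple assumptions (two programs, fixed bursts, a channel that needs no attention), not a description of any actual system: each program computes until it needs input/output, hands that work to the channel, and yields the central processor to whichever resident program is ready.

    # A toy simulation of multiprogramming: while one program's input/output is
    # handled by a "channel," the central processor runs the other program.
    # Programs and burst lengths are invented for illustration.

    from collections import deque

    # each program is a queue of alternating bursts: ("cpu", duration) or ("io", duration)
    programs = {
        "P1": deque([("cpu", 3), ("io", 5), ("cpu", 2)]),
        "P2": deque([("cpu", 4), ("io", 2), ("cpu", 3)]),
    }
    io_done_at = {name: 0 for name in programs}   # when each program's pending I/O completes
    clock = 0

    while any(programs.values()):
        ran = False
        for name, bursts in programs.items():
            if not bursts or clock < io_done_at[name]:
                continue                  # program finished, or still waiting on its I/O
            kind, length = bursts.popleft()
            if kind == "cpu":
                print(f"t={clock:2}: {name} holds the central processor for {length}")
                clock += length           # the processor is occupied for this burst
            else:
                print(f"t={clock:2}: {name} starts I/O for {length} and releases the processor")
                io_done_at[name] = clock + length   # the I/O proceeds in parallel
            ran = True
            break                         # let another resident program be considered next
        if not ran:
            clock += 1                    # every program is waiting on I/O; the processor idles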

It is not clear which individual or group actually conceived the idea of multiprogramming. It was clearly “in the air” circa 1962 to 1964. IBM implemented a form of multiprogramming on its 7094 machine (1962),50 Manchester University's Atlas computer system apparently supported multiprogramming by January 1964, and IBM's System/360 series supported multiprogramming, calling it multitasking.51

At about the same time, a new mode of input/output came into existence. The users were no longer actually denied direct access to a computer installation—a privilege that had been taken away from them with the advent of batch processing. Rather, they would sit at a “remote terminal”—a teletype reader/printer—possibly far removed physically from where the computer center was located, and interact with the computer on a “real-time” basis. This meant that, at any time, multiple users could have what seemed to be “simultaneous” access to the computer system. This, too, was an illusion. The user thought that he or she had exclusive access to the machine. This illusion was created by a combination of software and hardware that enabled the system's resources to be time-shared between multiple users, with each user given a time quantum to actually access the system before it was passed to another for a time quantum, and so on. However, switching between users happened at electronic speeds, hence the user's illusion. But all this had to happen automatically, with minimal human intervention. The effectiveness of the grand illusion demanded that the illusion be created by the computing system itself.
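A bare-bones sketch of the time-slicing idea follows. It is again only an illustration with arbitrary numbers, not the design of any particular system: each user's job receives a fixed quantum of processor time in turn, and because the switching happens far faster than human reaction times, every user perceives exclusive access.

    # A minimal round-robin time-sharing sketch: each user receives a fixed
    # quantum of processor time in turn until their work is done.
    # The quantum and workloads are invented for illustration.

    from collections import deque

    QUANTUM = 100                                               # milliseconds per turn (assumed)
    remaining = {"user_a": 250, "user_b": 120, "user_c": 400}   # work left, in milliseconds

    ready = deque(remaining)        # the queue of users awaiting their turn
    clock = 0
    while ready:
        user = ready.popleft()
        time_slice = min(QUANTUM, remaining[user])
        clock += time_slice
        remaining[user] -= time_slice
        print(f"t={clock:4} ms: served {user} for {time_slice} ms")
        if remaining[user] > 0:
            ready.append(user)      # not finished yet: back to the end of the queue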

Compilers were, of course, one class of such system programs. The other major class of system software that emerged from the mid 1950s went by various names: “monitor,” “supervisor,” “executive.” Eventually, the term that came to be more or less generally accepted was operating system. An example of a basic form of an operating system was implemented for the IBM 709 circa 1958. The machine came with input/output subroutines and a monitor that managed the execution of FORTRAN jobs.52 However, it was during the 1960s that the operating system properly came of age—an age at which it manifested itself in full, lush, huge, complex glory. And with its emergence was created a plethora of research problems and a gathering of new subparadigms.

How did this happen?

XII

The size of a computer's main memory obsessed computer designers and frustrated computer users. Even though main memory grew spectacularly in capacity from the comically small (from a present-centered perspective) size of the EDSAC memory (1,024 or “1K” locations) to what commercial machines had to offer during the mid 1960s—the IBM general-purpose System/360 series of machines announced in 1964 had up to 524,288 (512K) bytes (8-bit addressable locations) of main memory53—it never seemed enough; thus the need for a much larger, cheaper (but considerably slower) “secondary memory” (or “backup storage”) to be attached to the computer. A “memory hierarchy” was thus created involving the relatively faster, smaller main memory and the relatively slower, larger secondary memory. Thus was created the burden of managing this memory hierarchy so that when parts of a program and/or its data were necessary for execution, they would be transferred from secondary to main memory. This came to be called the memory management problem.

In 1962, a group at the University of Manchester led by Tom Kilburn published an article proposing a solution to this problem. They called their idea “one-level store.”54 Their idea was an instance par excellence of the grand illusion. In their case, it was to give users the illusion of (practically) unlimited memory space. This idea came to be called virtual memory.55

Kilburn and colleagues implemented this idea on the Atlas computer, designed and built at the University of Manchester in collaboration with Ferranti, their corporate partner on the Mark I (see Chapter 8, Section XIII). Manchester University had never ceased to build experimental computers since their Mark I endeavor. Atlas was the fourth of the Manchester computers56 (later, two more would emerge). Atlas was commissioned in December 1962 and was considered to be “the most powerful computer in the world” at the time.57 It offered the user a virtual memory of one million 48-bit words, even though the physical main memory comprised only 16,384 (16K) words.58
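The gap between one million virtual words and 16K physical words hints at the bookkeeping such an illusion requires. The sketch below is a deliberately simplified model of demand paging, not the Atlas mechanism: a virtual address is split into a page number and an offset, pages are fetched from secondary memory when first touched, and a least-recently-used resident page is evicted when physical memory is full. The page size, frame count, and replacement policy are all assumptions of mine.

    # A simplified demand-paging sketch: a large virtual address space mapped onto
    # a small physical memory, with pages brought in from "secondary memory" on demand.
    # The numbers and the least-recently-used policy are illustrative.

    from collections import OrderedDict

    PAGE_SIZE = 512            # words per page (assumed)
    NUM_FRAMES = 32            # physical memory holds only this many page frames (assumed)

    free_frames = list(range(NUM_FRAMES))   # frames not yet occupied
    page_table = OrderedDict()              # virtual page number -> frame, kept in LRU order

    def access(virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page in page_table:
            page_table.move_to_end(page)    # mark the page as recently used
        else:                               # a "page fault"
            if not free_frames:
                evicted_page, freed = page_table.popitem(last=False)  # least recently used
                free_frames.append(freed)
                print(f"page {evicted_page} written back to secondary memory")
            frame = free_frames.pop()
            page_table[page] = frame
            print(f"page {page} fetched from secondary memory into frame {frame}")
        return page_table[page] * PAGE_SIZE + offset    # the corresponding physical address

    print(access(1_000_000))   # an address far beyond physical memory still "just works"
    print(access(1_000_010))   # same page, already resident: no fetch needed this time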
