The Act of Creation
Arthur Koestler
The martyrology of science mentions only a few conspicuous cases which
ended in public tragedies. Robert Mayer, co-discoverer of the principle
of the Conservation of Energy, went insane because of lack of recognition
for his work. So did Ignaz Semmelweiss, who discovered, in 1847, that the
cause of childbed fever was infection of the patient with the 'cadaveric
material' which surgeons and students carried on their hands. As an
assistant at the General Hospital in Vienna, Semmelweiss introduced the
strict rule of washing hands in chlorinated lime water before entering
the ward. Before this innovation, one out of every eight women in the
ward had died of puerperal fever; immediately afterwards mortality fell
to one in thirty, and the next year to one in a hundred. Semmelweiss's
reward was to be hounded out of Vienna by the medical profession --
which was moved, apart from stupidity, by resentment of the suggestion
that they might be carrying death on their hands. He went to Budapest,
but made little headway with his doctrine, denounced his opponents as
murderers, became raving mad, was put into a restraining jacket, and
died in a mental hospital.
Apart from a few lurid cases of this kind we have no record of the
countless lesser tragedies, no statistics on the numbers of lives wasted
in frustration and despair, of discoveries which passed unnoticed. The
history of science has its Pantheon of celebrated revolutionaries -- and
its catacombs, where the unsuccessful rebels lie, anonymous and forgotten.
Limits of Confirmation
From the days of Greece to the present, that history echoes with the
sound and fury of passionate controversies. This fact in itself is
sufficient proof that the same 'bundle of data', and even the same
'crucial experiment', can be interpreted in more than one way.
To mention only a few of the more recent among these historic
controversies: the cosmology of Tycho Brahe explained the facts, as
they were known at the time, just as well as the system of Copernicus. In
the dispute between Galileo and the Jesuit Father Sarsi on the nature of
comets, we now know that both were wrong, and that Galileo was more wrong
than his forgotten opponent. Newton upheld a corpuscular, Huyghens a
wave-theory of light. In certain types of experiment the evidence favoured
Newton, in other types Huyghens; at present we tend to believe that both
are true. Leibniz derided gravity and accused Newton of introducing
'occult qualities and miracles' into science. The theories of Kekulé
and Van't Hoff on the structure of organic molecules were denounced by
leading authorities of the period as a 'tissue of fancies.' [14] Liebig
and Wöhler -- who had synthesized urea from inorganic materials
-- were among the greatest chemists of the nineteenth century; but
they poured scorn on those of their colleagues who maintained that the
yeast which caused alcoholic fermentation consisted of living cellular
organisms. They even went so far as to publish, in 1839, an elaborate skit
in the Annalen der Chemie, in which yeast was described 'with a
considerable degree of anatomical realism, as consisting of eggs which
developed into minute animals shaped like distilling apparatus. These
creatures took in sugar as food and digested it into carbonic acid and
alcohol, which were separately excreted.' [15] The great controversy
on fermentation lasted nearly forty years, and overlapped with the even
more passionate dispute on 'spontaneous generation' -- the question
whether living organisms could be created out of dead matter. In both
Pasteur figured prominently; and in both controversies the philosophical
preconceptions of 'vitalists' opposed to 'mechanists' played a decisive
part in designing and interpreting the experiments -- most of which were
inconclusive and could be interpreted either way.
I have compared the nineteenth century to a majestic river-delta, the
great confluence of previously separate branches of knowledge. This was
the reason for its optimism -- and its hubris; the general convergence of
the various sciences created the conviction that within the foreseeable
future the whole world, including the mind of man, would be 'reducible'
to a few basic mechanical laws. Yet as we enter our present century, we
find that in spite of this great process of unification, virtually every
main province of science is torn by even deeper controversies than before.
Thus, for instance, the most exact of the exact sciences has been split,
for the last twenty years, into two camps: those who assert (with Bohr,
Heisenberg, von Neumann) that strict physical causality must be replaced
by statistical probability because subatomic events are indeterminate
and unpredictable; and those who assert (with Einstein, Planck, Bohm,
and Vigier) that there is order hidden beneath the apparent disorder,
governed by as yet undiscovered laws, because they 'cannot believe that
God plays with dice'. Another controversy opposes the upholders of the
'big-bang theory', according to which the universe originated in the
explosion of a single, densely packed mass some thirty thousand million
years ago and has been expanding ever since -- and the upholders of the
'steady-state theory', according to which matter is continually being
created in a stable cosmos. In genetics, the neo-Darwinian orthodoxy
maintains that evolution is the result of chance mutations, against the
neo-Lamarckian heretics, who maintain that evolution is not a dice-game
either -- that some of the improvements due to adaptive effort can be
transmitted by heredity to successive generations. In neuro-physiology,
one school maintains that there is rigid localization of functions in
the brain, another, that the brain works in a more flexible manner. In
mathematics, 'intuitionists' are aligned against 'formalists'; in the
medical profession, opinions are divided regarding the psychological or
somatological origin of a great number of diseases; therapeutic methods
vary accordingly, and each school is subdivided into factions.
Some of these controversies were decided by cumulative evidence in favour
of one of the competing theories. In other cases the contradiction between
thesis and antithesis was resolved in a synthesis of a higher order. But
what we call 'scientific evidence' can never confirm that a theory is
true; it can only confirm that it is more true than another.
I have repeatedly emphasized this point -- not in order to run
down science, but to run down the imaginary barrier which separates
'science' from 'art' in the contemporary mind. The main obstacle which
prevents us from seeing that the two domains form a single continuum
is the belief that the scientist, unlike the artist, is in a position
to attain to 'objective truth' by submitting theories to experimental
tests. In fact, as I have said before, experimental evidence can confirm
certain expectations based on a theory, but it cannot confirm the theory
itself. The astronomers of Babylon were able to make astonishingly precise
predictions: they calculated the length of the year with a deviation of
only 0.001 per cent from the correct value; their figures relating to
the motions of sun and moon, which form a continuous record starting
with the reign of Nabonassar, 747 B.C., were the foundation on which
the Ptolemaic, and later the Copernican, systems were built. Theirs was
certainly an exact science, and it 'worked'; but that does not prove the
truth of their theories, which asserted that the planets were gods whose
motions had a direct influence on the health of men and the fortunes of
states. Columbus put his theories to a rather remarkable experimental
test; what did the evidence prove? He and his contemporaries navigated
with the aid of planetary tables, computed by astronomers who thought
the planets ran on circles, knew nothing of gravity and elliptic orbits,
yet the theory worked -- though they had the wrong idea why
it worked. Time and again new drugs against various diseases were
tried in hospital wards, and improvement in the patients' condition was
considered experimental evidence for the efficacy of the drug; until
the use of dummy pills indicated that other explanations were equally
valid. Eysenck has questioned the value of psychotherapy in general, by
suggesting that the statistical evidence for successful cures should be
reinterpreted in the light of the corresponding numbers of spontaneous
recoveries of untreated patients. His conclusions may be quite wrong;
but his method of argument has many honourable precedents in the history
of science. To quote Polanyi:
For many prehistoric centuries the theories embodied in magic and
witchcraft appeared to be strikingly confirmed by events in the eyes
of those who believed in magic and witchcraft. . . . The destruction of
belief in witchcraft during the sixteenth and seventeenth centuries was
achieved in the face of an overwhelming, and still rapidly growing body
of evidence for its reality. Those who denied that witches existed did
not attempt to explain this evidence at all, but successfully urged
that it be disregarded. Glanvill, who was one of the founders of the
Royal Society, not unreasonably denounced this proposal as unscientific,
on the ground of the professed empiricism of contemporary science. Some
of the unexplained evidence for witchcraft was indeed buried for good,
and only struggled painfully to light two centuries later when it was
eventually recognized as the manifestation of hypnotic powers. [16]
It is generally thought that physical theories are less ambiguous than
medical and psychological theories, and can be confirmed or refuted
by harder and cleaner experimental tests. Speaking in relative terms,
this is, of course, true; physics is much closer to the 'ultra-violet'
than to the 'infra-red' end of the continuous spectrum of the sciences
and arts. But a last example will show on what shaky 'empirical evidence'
a generally accepted theory can rest; and in this case I am talking of
the cornerstone of modern physics, Einstein's Theory of Relativity.
According to the story told in the textbooks, the initial impulse
which set Einstein's mind working was a famous experiment carried out
by Michelson and Morley in 1887. They measured the speed of light and
found, so we are told, that it was the same whether the light travelled
in the direction of the earth or in the opposite direction; although in
the first case it ought to have appeared slower, in the second faster,
because in the first case the earth was 'catching up' with the light-ray,
in the second was racing away from it. This unexpected result, so the
story goes, convinced Einstein that it was nonsense to talk of the
earth moving through space which was at rest, as a body moves through a
stationary liquid (the ether); the constancy of the speed of light proved
that Newton's concept of an absolute frame of space, which allowed us
to distinguish between 'motion' and 'rest', had to be dropped.
Now this official account of the genesis of Relativity is not fact
but fiction. In the first place, on Einstein's own testimony the
Michelson-Morley experiment 'had no role in the foundation of the
theory'. That foundation was laid on theoretical, indeed speculative,
considerations. And in the second place, the famous experiment did not in
fact confirm, but contradicted Einstein's theory. The speed of light was
not at all the same in all directions. Light-signals sent 'ahead' along
the earth's orbit travelled slower than signals 'left behind'. It is true
that the difference amounted to only about one-fourth of the magnitude
to be expected on the assumption that the earth was drifting through a
stationary ether. But the 'ether-drift' still amounted to the respectable
velocity of about five miles per second. The same results were obtained
by D. C. Miller and his collaborators, who repeated the Michelson-Morley
experiments, with more precise instruments, in a series of experiments
extending over twenty-five years (1902 to 1926). The rest of the story
is best told by quoting Polanyi again:
The layman, taught to revere scientists for their absolute respect
for the observed facts, and for the judiciously detached and purely
provisional manner in which they hold scientific theories (always ready
to abandon a theory at the sight of any contradictory evidence) might
well have thought that, at Miller's announcement of this overwhelming
evidence of a 'positive effect' in his presidential address to
the American Physical Society on December 29th, 1925, his audience
would have instantly abandoned the theory of relativity. Or, at the
very least, that scientists -- wont to look down from the pinnacle
of their intellectual humility upon the rest of dogmatic mankind --
might suspend judgement in this matter until Miller's results could be
accounted for without impairing the theory of relativity. But no: by
that time they had so well closed their minds to any suggestion which
threatened the new rationality achieved by Einstein's world-picture,
that it was almost impossible for them to think again in different
terms. Little attention was paid to the experiments, the evidence being
set aside in the hope that it would one day turn out to be wrong. [17]
So it may. Or it may not. Miller devoted his life to disproving Relativity
-- and at face value, so far as experimental data are concerned,
he succeeded.* A whole generation later, W. Kantor of the U.S. Navy
Electronics Laboratory repeated once more the 'crucial experiment'. Again
his instruments were far more accurate than Miller's, and again they
seemed to confirm that the speed of light was not independent of the
motion of the observer -- as Einstein's theory demands. And yet the
vast majority of physicists are convinced -- and I think rightly so --
that Einstein's universe is superior to Newton's. Partly this trust is
based on evidence less controversial than the 'crucial' experiments
that I have mentioned; but mainly on the intuitive feeling that the
whole picture 'looks right', regardless of some ugly spots that will,
with God's help, vanish some day. One of the most prominent among them,
Max Born, who inclines to a positivistic philosophy, betrayed his true
feelings, when he hailed the advent of Relativity because it made the
universe of science 'more beautiful and grander'.
Paul Dirac, undoubtedly the greatest living British physicist,
was even more outspoken on the subject. He and my late friend Erwin
Schrödinger shared the Nobel Prize in 1933 as founding fathers of
quantum mechanics. In an article [17a] on the development of modern
physics, Dirac related how Schrödinger discovered his famous wave
equation of the electron. 'Schrödinger got his equation by pure
thought, looking for some beautiful generalization . . . and not by
keeping close to the experimental developments of the subject', Dirac
remarks approvingly. He then continues to describe how Schrödinger,
when he tried to apply his equation, 'got results that did not agree
with experiment. The disagreement arose because at that time it was not
known that the electron has a spin.' This was a great disappointment
to Schrödinger, and induced him to publish, instead of his original
formula, an imperfect (non-relativistic) approximation. Only later on,
by taking the electron's spin into account, did he revert to his original
equation. Dirac concludes:
I think there is a moral to this story, namely that it is more important
to have beauty in one's equations than to have them fit experiment. If
Schrödinger had been more confident of his work, he could have
published it some months earlier, and he could have published a
more accurate equation . . . It seems that if one is working from
the point of view of getting beauty in one's equations, and if one
has really a sound insight, one is on a sure line of progress. If
there is not complete agreement between the results of one's work
and experiment, one should not allow oneself to be too discouraged,
because the discrepancy may well be due to minor features that are not
properly taken into account and that will get cleared up with further
developments of the theory . . .
In other words, a physicist should not allow his subjective conviction
that he is on the right track to be shaken by contrary experimental
data. And vice versa, its apparent confirmation by experimental data does
not necessarily prove a theory to be right. There is a rather hideous
trick used in modern quantum mechanics called the 'renormalization
method'. Dirac's comment on it is:
I am inclined to suspect that the renormalization theory is something
that will not survive in the future, and that the remarkable agreement
between its results and experiment should be looked on as a fluke . . .
I think I have said enough to show that 'scientific evidence' is a rather
elastic term, and that 'verification' is always a relative affair. The
criteria of truth differ from the criteria of beauty in that the former
refer to cognitive, the latter to emotive processes; but neither of them
are absolute. 'The evidence proves' is a statement which is supposed to
confer on science a privileged intimacy with truth which art can never
hope to attain. But 'the evidence proves' that the statement in quotes
is always based on an act of faith. To quote K. R. Popper:
