Farewell to Reality

Authors: Jim Baggott
The reason is perhaps not all that hard to understand. The Copenhagen interpretation's insistence on the distinction between microscopic quantum system and classical macroscopic measurer or observer is fine in practice (if not in principle) when we're trying to deal with routine ‘everyday' measurements in the laboratory. But early theories of quantum cosmology, culminating in the development of the ΛCDM model, tell us that there was a time when the entire universe was a quantum system.

When American theorist Bryce DeWitt, who had also been a student of Wheeler, developed an early version of a theory of quantum gravity featuring a ‘wavefunction of the universe', this heralded the beginnings of a quantum cosmology. There was no escaping the inadequacy of the Copenhagen interpretation in dealing with such a wavefunction. If we assume that everything there is is ‘inside' the universe, then there can be no measuring device or observer sitting outside whose purpose is to collapse the wavefunction of the universe and make it ‘real'.

DeWitt, for one, was convinced that there could be no place for a special or privileged ‘observer' in quantum theory. In 1973, together with his student Neill Graham, he popularized Everett's approach as the ‘many worlds' interpretation of quantum theory, publishing Everett's original (unedited) Princeton PhD thesis in a book alongside a series of companion articles.

In this context, the different ‘worlds' are the different branches of the universe that split apart when a measurement is made. According to this interpretation, the observer is unaware that the universe has split. The observer records the single result ‘up'. She scratches her head and concludes that the wavefunction has somehow mysteriously collapsed. She is unaware of her parallel self, who is also scratching her head and concluding that the wavefunction has collapsed to give the single result ‘down'.

If this were really happening, how come we remain unaware of it? Wouldn't we retain some sense that the universe has split? The answer given by early proponents of the Everett formulation is that the laws of quantum theory simply do not allow us to make this kind of observation.

In a footnote added to the proofs of his 1957 paper, Everett accepted the challenge that a universe that splits every time we make a quantum measurement appears to contradict common experience (and sense). However, he went on to note that when Copernicus first suggested that the earth revolves around the sun (and not the other way around), this view was initially criticized on the grounds that nobody had ever directly experienced the motion of the earth through space. Our inability to sense that the earth is moving was eventually explained by Galileo's theory of inertia. Likewise, Everett argued, our inability to sense a splitting of the universe is explained by quantum physics.

If the act of quantum measurement has no special place in the many worlds interpretation, then there is no reason to define measurement as being distinct from any process involving a quantum transition between initial and final states. Now, we would be safe to assume that there have been a great many quantum transitions since the big bang origin of the universe. Each transition will have therefore split the universe into as many worlds as there were contributions in all the different quantum superpositions. DeWitt estimated that there must by now be more than a googol (10¹⁰⁰) worlds.
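The scale of DeWitt's figure follows from simple exponential arithmetic. As a hedged illustration (the uniform two-way branching here is my simplifying assumption, not DeWitt's actual calculation): if every quantum transition split each existing world into just two branches, then n transitions would yield 2ⁿ worlds, and a few hundred transitions already exceed a googol.

```python
# Illustrative arithmetic only: assume each transition splits every
# existing world into a fixed number of branches.

def worlds_after(n_transitions, branches_per_transition=2):
    """Number of worlds after n transitions, assuming uniform branching."""
    return branches_per_transition ** n_transitions

googol = 10 ** 100

# Find the smallest number of binary splits that exceeds a googol.
n = 0
while worlds_after(n) < googol:
    n += 1
print(n)  # 333: 2**333 > 10**100 > 2**332
```

With realistic branching (far more than two outcomes per transition, and vastly more transitions), the count climbs past a googol almost immediately.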

As Wheeler himself remarked, the many worlds interpretation is cheap on assumptions, but expensive with universes.

As the different worlds in which the different outcomes are realized are completely disconnected from each other, there is a sense in which Everett's formulation anticipated the emergence of decoherence theory. Indeed, variants of the many worlds interpretation that have been developed since DeWitt resurrected it have explicitly incorporated decoherence, avoiding the problem of objectification by assuming that different outcomes are realized in different worlds.

The original Everett conception of a universe which splits into multiple copies has undergone a number of reinterpretations. Oxford theorist David Deutsch argued that it is wrong to think of the universe splitting with each quantum interaction, and proposed instead that there exist a possibly infinite number of parallel worlds or parallel universes, among which the different outcomes are somehow partitioned. These parallel universes have become collectively known as the ‘multiverse'.*

Are we really supposed to take this seriously?

It is certainly true that an increasing number of theorists are sufficiently perplexed by the quantum measurement problem that they are willing to embrace many worlds, despite all its metaphysical baggage. At a scientific workshop on the interpretation of quantum theory held in August 1997, the participants conducted an informal poll. Of the 48 votes recorded, 13 (or 27 per cent) were cast in favour of the Copenhagen interpretation. The second most popular choice, attracting eight votes (17 per cent) was the many worlds interpretation. Tegmark reported:

Although the poll was highly informal and unscientific (several people voted more than once, many abstained, etc.), it nonetheless indicated a rather striking shift in opinion compared to the old days when the Copenhagen interpretation reigned supreme. Perhaps most striking of all is that the many worlds interpretation … proposed by Everett in 1957 but virtually unnoticed for about a decade, has survived 25 years of fierce criticism and occasional ridicule to become the number one challenger to the leading orthodoxy…⁹

Before we go any further, I think we should probably remind ourselves of precisely what it is we're dealing with here. The many worlds interpretation singularly avoids assumptions regarding the collapse of the wavefunction and does not seek to supplement or complete quantum theory in any way. Rather, it takes the framework provided by quantum theory's deterministic equations and insists that this is all there is.

But the consequence of this approach is that we are led inexorably to the biggest assumption of all:

The Many Worlds Assumption.
The different possible outcomes of a quantum measurement or the different possible final states of a quantum transition are all assumed to be realized, but in different equally real worlds which either split from one another or exist in parallel.

The reality check

Attempts to rehabilitate the many worlds interpretation by devising versions that appear to carry less metaphysical baggage have been broadly frustrated. Despite the reasonableness of these versions and their use of less colourful language, the measurement problem remains particularly stubborn. We might at this stage be inclined to think that, after all, there's no real alternative to the many worlds interpretation's more extreme logic. Perhaps we should just embrace it and move on?

But before we do, it's time for another reality check.

The first thing we should note is that the many worlds interpretation is not a single, consistent theory that carries the support of all who declare themselves ‘Everettians'. It is not even a single, consistent interpretation. It's more correct to say that there is a loose-knit group of theorists who buy into the idea that the quantum measurement problem can be solved by invoking the many worlds assumption. But each has rather different ideas about precisely how this assumption should be made and how the practical mechanics should be handled. As Cambridge theorist Adrian Kent recently noted:

Everettian ideas have been around for 50 years, and influential for at least the past 30. Yet there has never been a consensus among theoretical physicists either that an Everettian account of quantum theory can be made precise and made to work, or that the Everettian programme has been comprehensively refuted.¹⁰

One of the single biggest challenges to the acceptance of the many worlds interpretation lies in the way it handles probability. In conventional quantum theory, in which we assume wavefunction collapse, we use the squared moduli of the amplitudes of the different components in the wavefunction to determine the probabilities that these components will give specific measurement outcomes. Although we might have our suspicions about what happens to those components that ‘disappear' when the wavefunction collapses, we know that if we continue to repeat the measurement on identically prepared systems, then all the outcomes will be realized with frequencies related to their quantum probabilities.

Let's use an example. Suppose I form a superposition state which consists of both ‘up' and ‘down' (it doesn't matter what actual properties I'm measuring). I take 0.866 times the ‘up' wavefunction and 0.500 times the ‘down' wavefunction and add them together. I now perform a measurement on this superposition to determine if the state is either ‘up' or ‘down'. I cannot predict which result I will get for any specific measurement, but I can determine that the probability of getting the result ‘up' is given by 0.866², or 0.75 (75 per cent), and the probability of getting the result ‘down' is 0.500², or 0.25 (25 per cent).* This means that in 100 repeated experiments, I would expect to get ‘up' 75 times and ‘down' 25 times. And this is precisely what happens in practice.
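The calculation in this example is just the Born rule: the probability of an outcome is the square of its amplitude. A minimal sketch in Python (my own illustration, not from the book; note that 0.866 is approximately √3/2, so the two probabilities sum to almost exactly 1):

```python
import random

# Born rule: the probability of each outcome is the square of its amplitude.
amp_up, amp_down = 0.866, 0.500   # amplitudes from the example in the text
p_up = amp_up ** 2                # ≈ 0.75 (75 per cent)
p_down = amp_down ** 2            # ≈ 0.25 (25 per cent)

# Simulate 100 measurements on identically prepared superposition states.
random.seed(1)                    # fixed seed so the run is repeatable
results = ['up' if random.random() < p_up else 'down' for _ in range(100)]

print(round(p_up, 3), round(p_down, 3))  # 0.75 0.25
print(results.count('up'))               # close to 75, as the text says
```

Over many repeated runs the observed frequency of ‘up' converges on 75 per cent, which is the sense in which quantum probabilities are confirmed in practice.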

How does this translate to the many worlds scenario? The many worlds assumption suggests that for each measurement, the universe splits or partitions such that each ‘up' result in one world is matched by a ‘down' result in a parallel world, and vice versa. If in this world we record the sequence ‘up', ‘up', ‘down', ‘up' (consistent with the 75:25 probability ratio of conventional quantum theory), then is it the case that the other we in another world record the sequence ‘down', ‘down', ‘up', ‘down'? If so, then this second sequence is clearly no longer consistent with the probabilities we would calculate based on the original superposition.

Things are obviously not quite so simple.

Perhaps each repeated measurement splits or partitions the sequence among more and more worlds? But how then do we recover a sequence that is consistent with our experience in the one world we do observe? Should the probabilities instead be applied somehow to the worlds themselves? What would this mean for parallel worlds that are meant to be equally real? Should we extend the logic to include an infinite number of parallel worlds?
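One way to see the tension behind these questions is to enumerate the branches explicitly. In this hedged sketch (my own construction, using four binary measurements and the 75:25 probabilities from the earlier example), naive branch counting and Born-rule weighting give quite different answers:

```python
from itertools import product

# List every possible sequence of four 'up'/'down' results. Naive
# counting treats all 16 branches as equally real and equally likely;
# the Born rule weights each sequence by the product of the outcome
# probabilities (0.75 for 'up', 0.25 for 'down').
p_up, p_down = 0.75, 0.25
sequences = list(product(['up', 'down'], repeat=4))  # 2**4 = 16 branches

for n_up in range(5):
    matching = [s for s in sequences if s.count('up') == n_up]
    branch_share = len(matching) / len(sequences)        # naive count
    born_weight = sum(p_up ** n_up * p_down ** (4 - n_up)
                      for _ in matching)                 # Born rule
    print(n_up, branch_share, round(born_weight, 6))

# Branch counting says a sequence with three 'up's is exactly as common
# as one with three 'down's (four branches each); the Born rule makes
# each three-'up' sequence nine times more probable than each
# three-'down' one.
```

Counted naively, most branches show ‘up' frequencies nowhere near 75 per cent; only by weighting branches with the Born rule does the expected statistics reappear, and it is exactly the status of those weights that the debate is about.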

In The Hidden Reality, Brian Greene acknowledges the probability problem in a footnote:

So from the standpoint of observers (copies of the experimenter) the vast majority would see spin-ups and spin-downs in a ratio that does not agree with the quantum mechanical predictions … In some sense, then … (the vast majority of copies of the experimenter) need to be considered as ‘nonexistent'. The challenge lies in understanding what, if anything, that means.¹¹

As we dig deeper into the problem, we realize that the seductive simplicity that the many worlds interpretation seemed to offer at first sight is, in fact, an illusion.

Various solutions to the problem of probability in the many worlds interpretation have been advanced, and Everett himself was convinced that he had shown in his PhD thesis how the quantum mechanical probabilities can be recovered. But it is fair to say that this is the subject of ongoing debate, even within the community of Everettians.

Eternal inflation

The multiverse interpretation that we have so far considered has been invoked as a potential solution to the quantum measurement problem. Its purpose is to free the wavefunction from the collapse assumption so that it can be applied to describe the universe that we observe. If this was as far as it went, then perhaps the multiverse of many worlds would not provoke more than an occasional research paper, a chapter or two in contemporary texts on quantum theory and the odd dry academic conference.

But there's obviously more to it than this.

We have seen how cosmic inflation provides solutions to the horizon and flatness problems in big bang cosmology. However, inflation is more a technique than a single cut-and-dried theory, and it may come as no surprise to discover that there are many ways in which it can be applied. One alternative approach was developed in 1983 by Russian theorist Alexander Vilenkin, and was further elaborated in 1986 by Russian-born theorist Andrei Linde.

One of the problems with the current ΛCDM model of big bang cosmology is that it leaves us with quite a few unanswered (and possibly unanswerable) questions. Perhaps one of the most perturbing is the fine-tuning problem. In order to understand why the universe we inhabit has the structure and the physical laws that it has, we need to wind the clock back to the very earliest moments of its existence. But it is precisely here that our theories break down or run beyond their domain of applicability.

But now here's a thought. We can try to determine the special circumstances that prevailed during the very earliest moments of the big bang, circumstances that shaped the structure and physical laws that we observe. Alternatively, we can question just how ‘special' these circumstances actually were. Just as Copernicus argued that the earth is not the centre of the universe; just as modern astronomy argues that our planet orbits an unexceptional star in an unexceptional galaxy in a vast cosmos, could we argue that our entire universe is no more than a relatively unexceptional bit of inflated spacetime in an unimaginably vaster multiverse?

If inflation as a technique is freed from the constraint that it should apply uniquely to our universe, then all sorts of interesting things become possible. Linde discovered that it was possible to construct theories on a rather grander scale:

There is no need for quantum gravity effects, phase transitions, supercooling or even the standard assumption that the universe originally was hot. One just considers all possible kinds and values of scalar [i.e. inflaton] fields in the early universe and then checks to see if any of them leads to inflation. Those places where inflation does not occur remain small. Those domains where inflation takes place become exponentially large and dominate the total volume of the universe. Because the scalar fields can take arbitrary values in the early universe, I called this scenario chaotic inflation.¹²
