
Feynman derived a beautiful physical example of Bell’s work by showing that if one tried to mock up a classical computer that could produce the exact same probabilities that a quantum system would produce for some observable quantities as the system evolved, then the probability of some other observable quantity would need to be negative. Such negative probabilities make no physical sense. In a very real way, the world of quantum probabilities is larger than anything that can be embedded in a purely classical world.

While providing a nice physical demonstration of why all hidden-variable theories are doomed to failure, Feynman also asked a more interesting question in his paper. Would it be possible to invent a computer that was quantum mechanical in nature? Namely, if the fundamental computer bits were quantum objects, like, say, the spin of an electron, could one then numerically simulate exactly the behavior of any quantum mechanical system, and thus address quantum mechanical simulations that no classical computer could efficiently handle?

His initial answer in the 1982 paper was a resounding “probably.” However, he continued to think about this question, spurred on by the work of Charles Bennett, a physicist at IBM’s research laboratories who had demonstrated that much of the conventional wisdom about the physics of computing was incorrect. In particular, the assumption had been that every time a computer performs a computation, it must dissipate energy as heat (after all, anyone who has worked with a laptop knows how hot it can get). However, Bennett showed that it is possible, in principle, to perform a computation “reversibly.” In other words, it is theoretically possible to perform such a calculation and then perform exactly the reverse operations and end up where one began, without any loss of energy to heat.
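
A rough illustration of the kind of reversibility Bennett had in mind (a sketch only, not his actual construction; the function names here are purely illustrative): a reversible logical operation, such as a controlled-NOT acting on a pair of bits, can be undone simply by applying it again, whereas an ordinary AND gate throws away information about its inputs and so cannot be run backward.

```python
def cnot(control, target):
    """Reversible controlled-NOT: flip the target bit whenever the control bit is 1."""
    return control, target ^ control

def and_gate(a, b):
    """Ordinary irreversible AND: the inputs cannot be recovered from the output alone."""
    return a & b

# Applying the reversible gate twice restores the original pair of bits,
# so in principle no information (and no energy) need be lost.
for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)

# The AND gate maps three different inputs to the same output (0),
# so the computation cannot be reversed without extra bookkeeping.
print([and_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```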

The question arose: would the quantum mechanical world, with all of its quantum fluctuations, spoil this result? In a paper in 1985, Feynman demonstrated that the answer was no. But in order to do this, he had to come up with a theoretical model for a universal quantum computer—namely, a purely quantum mechanical system whose evolution could be controlled to produce the necessary logical elements that are part of a universal computing system (that is, “and,” “not,” “or,” etc.). He developed a model for such a computer and described how one might operate it in principle, thus concluding, “At any rate, it seems that the laws of physics present no barrier to reducing the size of computers until bits are the size of atoms, and quantum behavior holds dominant sway.”

While the general physical question he was addressing was fairly academic, he realized that the possibility of actually building a computer so small that the laws of quantum mechanics would govern the behavior of its individual elements might be of real practical interest. Following his statement that the quantum computer he had theoretically sketched out had been designed to mimic classical computers, with each logical operation being done sequentially, he added, almost as a throwaway, “What can be done, in these reversible quantum systems, to gain the speed available by concurrent operations has not been studied here.” The possibility suggested by this single line could easily change our world. Once again, Feynman had suggested an idea that would dominate research developments in an entire field for a generation, even if he himself did not produce the later, seminal results.

The field of quantum computing has become one of the most exciting areas of theoretical and experimental interest, precisely because of Feynman’s argument that classical computers could never exactly mimic quantum systems. Quantum systems are much richer, and therefore it is possible that a “quantum computer” could run new types of computational algorithms that would allow it to efficiently and realistically complete a calculation that might take the biggest classical computer available today longer than the age of the universe.

The key idea is really the simple feature that Feynman exploited so effectively at Los Alamos, and that his path-integral formulation of quantum mechanics so explicitly displays. Quantum systems will, by their very nature, explore an infinite number of different paths at the same time. If each path could be made to represent a specific computation, then a quantum system might be nature’s perfect parallel processor.

Consider the system Feynman discussed first, a simple quantum mechanical particle with two spin states, which we might label “up” and “down.” If we call the up state “1” and the down state “0,” then this spin system describes a typical single computational bit of information. However, the important feature of such a quantum mechanical system is that until we measure it to be in the up or down state, quantum mechanics tells us that it has a finite probability of being in either state, which is tantamount to saying that it is really in both states at the same time. This makes a quantum bit, or qubit, as it has now become known, much different from a classical bit. If we can find ways of operating on such a qubit without actually measuring it, and therefore forcing it into a specific state, the possibility exists of having a single quantum processor doing more than one computation at the same time.
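
A minimal numerical sketch of this idea (purely illustrative; the amplitude values and the function name are arbitrary choices, not anything from Feynman’s papers): a single qubit can be described by two amplitudes, one for “down” (0) and one for “up” (1), and a measurement forces it into one definite state with probabilities given by the squared magnitudes of those amplitudes.

```python
import random

# Two complex amplitudes describing one qubit: alpha for "down" (0), beta for "up" (1).
# The particular values are arbitrary; they need only satisfy |alpha|^2 + |beta|^2 = 1.
alpha, beta = complex(3 / 5), complex(4 / 5)

p_down = abs(alpha) ** 2  # probability of measuring "down": 0.36
p_up = abs(beta) ** 2     # probability of measuring "up":   0.64

def measure():
    """Simulate a measurement, which forces the qubit into a definite 0 or 1 state."""
    return 1 if random.random() < p_up else 0

print(round(p_down, 2), round(p_up, 2))
print([measure() for _ in range(10)])  # a random string of 0s and 1s, mostly 1s
```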

In 1994, Peter Shor, an applied mathematician at Bell Laboratories, demonstrated the potential power of such a system, and the world took notice. Shor showed that a quantum mechanical computer could efficiently solve a specific mathematical problem that had proved to be impossible for classical computers to solve in less than what was effectively an infinite time. The problem is simply stated: Every number can be written uniquely as the product of prime numbers. For example, 15 is 3 × 5, 99 is 11 × 3 × 3, 54 is 2 × 3 × 3 × 3, and so on. As numbers get larger and larger, it becomes exponentially more difficult to determine this unique decomposition. What Shor proved was that an algorithm could be developed for a quantum computer to explore the space of prime factors of any number and derive the correct decomposition.
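
For the classical side of the comparison, here is a hedged sketch (illustrative only; this is simple trial division, not Shor’s quantum algorithm, and the function name is invented for this example) that reproduces the decompositions quoted above. It works instantly for small numbers, but for the hundreds-of-digit numbers used in cryptography, approaches like this become hopeless.

```python
def prime_factors(n):
    """Return the prime factorization of n by trial division.

    Fine for small numbers; utterly impractical for the very large
    numbers on which modern encryption keys are based.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(15))  # [3, 5]
print(prime_factors(99))  # [3, 3, 11]
print(prime_factors(54))  # [2, 3, 3, 3]
```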

Why should we care about this rather obscure result? Well, for those of us who have money in a bank, or use credit cards for transactions, or are concerned about the security of the codes used to keep national secrets safe, this should matter a lot. All modern banking and national security information is encrypted using a simple code that is impossible for any classical computer to break. The encryption is performed using a “key” that is based on knowing the prime factors of a very large number. Unless we know the factors in advance, we cannot break the code using a normal computer because it would take longer than the age of the universe to do so. However, a sufficiently “large” quantum computer could do the job in a manageable time. What large means depends on the complexity of the problem, but systems involving a few hundred or a few thousand qubits would easily be up to the task.

Should we rush out and take our money out of the bank and hide it under the bed, or rush into our survivalist shelter and await the impending invasion following the breakdown of our national security codes? Obviously not. In the first place, in spite of the huge resources being devoted to ongoing experimental efforts, no one has been able to build a quantum computer out of more than a few qubits. The reason is simple. In order for the computer to behave quantum mechanically, the qubits must be carefully isolated from all outside interactions, which would otherwise wash out the quantum mechanical information stored in the system—the same reason we behave classically and not quantum mechanically. In most systems, what is normally called quantum coherence—the preservation of the quantum mechanical configuration of the separate components of the system—is destroyed in a microscopic fraction of a second. Keeping quantum computers “quantum-like” is a major challenge, and no one knows if this will ultimately be practical in an operational sense.

More important than this practical consideration is the fact that the same quantum mechanical principles that allow a quantum computer to obviate the classical limits in solving such problems as prime factorization also would make possible, in principle, the development of new “quantum transmission” algorithms that allow a completely secure transfer of information from point to point. By this I mean that we would be able to determine with absolute certainty whether a snooping third party has intercepted a message.

The explosion of ideas that has pushed the new field of quantum computing from a twinkle in Feynman’s eye in 1960 to the forefront of modern science and technology has been vast, too vast to properly describe here. Ultimately these ideas might lead to changes in the way modern industrialized societies are organized. In a practical sense, these research developments might represent some of Feynman’s most important intellectual legacies, even if he didn’t live long enough to fully appreciate the significance of his suggestions. It never ceases to amaze me how seemingly esoteric speculations by a bold and creative mind can help change the world.

CHAPTER 17

Truth, Beauty, and Freedom

I don’t feel frightened by not knowing things, by being lost in a mysterious universe without any purpose, which is the way it really is, as far as I can tell, possibly. It doesn’t frighten me.

—RICHARD FEYNMAN

On October 8, 1967, the New York Times Magazine ran a story titled “Two Men in Search of the Quark.” The author, Lee Edson, proclaimed, “The men largely responsible for sending scientists on this wild quark chase are two California Institute of Technology physicists named Murray Gell-Mann and Richard Feynman. . . . One California scientist calls the two men ‘the hottest properties in theoretical physics today.’ ”

At the time, the latter statement was justified. The former was not, however. Over the previous six to seven years, following their joint work on the weak interaction, Feynman had steadily decoupled from the rush to make sense of the emerging confusion in particle physics, as the growing zoo of strongly interacting particles coming out of accelerators seemed designed to taunt all of those who, like the sailors in the Odyssey, were attracted to the siren’s call, only to crash against the rocks. Gell-Mann, on the other hand, had attacked the situation head-on, with every tool at his and his colleagues’ disposal and, after a number of struggles and false starts, had finally brought some hope of clarity to the field.

Feynman summarized the situation in a talk at the sixtieth birthday celebration for Hans Bethe at Cornell, opening with an echo of statements he had made in his first paper on liquid helium in the early 1950s: “One of the reasons why I haven’t done anything much with the strongly interacting particles is that I thought there wasn’t enough information to get a good idea. My good colleague, Professor Gell-Mann, is perpetually proving me incorrect. . . . We suddenly hear the noises of the crackling of the breaking of the nut.”

To get a sense of the confusion that reigned in particle physics during the first half of that decade, we need only reflect on the current best approach to understanding quantum gravity and the associated possible “theory of everything.” There are many ideas but little data to guide physicists, and the more we follow up on the theoretical proposals, the more confusing the situation seems to be. In the 1960s, to be sure, machines were producing a lot more data, but no one knew where it was leading. Had anyone suggested in 1965 that within a decade we would develop an almost complete theoretical basis for understanding not only the weak but also the strong force, most physicists would have been incredulous.

Gell-Mann had indeed cracked open the nut with a remarkable insight. At a time when many physicists were considering giving up on even the possibility of developing an understanding of particle physics using the techniques that had worked so well with QED, Gell-Mann, in 1961, discovered the importance of group theory, which gave him a mathematical tool to classify the plethora of new elementary particles according to their symmetry properties. Amazingly, all of the different particles seemed to fall within various multiplets, as they were called, in which each particle could be transformed into another particle by the application of a symmetry transformation associated with the group. These symmetries are like the rotations I described earlier, which can leave certain figures, like triangles and circles, looking the same. In Gell-Mann’s scheme (and as independently discovered by several others around the world), the different particles fell into sets of representations whose properties (charge and strangeness, for example) could be graphed so that they formed the vertices of a polyhedron, and all of the particles in each polyhedron could then be transformed into each other by symmetries, which could effectively rotate the polyhedron in different directions.

The group that Gell-Mann found could classify strongly interacting particles was labeled SU(3). It basically had eight different internal rotations that could connect particles in different polyhedron-like multiplets of various sizes, although the most obvious representation would have eight members. In this way he was able to classify almost all of the known strongly interacting particles. Exuberant over the success of his classification scheme (although both he and the rest of the community were far from convinced it was right, based on the evidence then at hand), he called it the “eightfold way,” not just because of the numerical property of SU(3), but, in typical Gell-Mann fashion, because of a saying of the Buddha about the eight ways to achieve nirvana: “Now this, O monks, is the noble truth that leads to the cessation of pain; this is the noble Eightfold Way; namely right views, right intention, right speech, right action, right living, right effort, right mindfulness, right concentration.”

When Gell-Mann, as well as the Israeli physicist Yuval Ne’eman, classified the particles this way, one set of nine particles could not be so classified. However, it was known that there was a ten-member representation of the SU(3) symmetry group—a so-called decuplet—that they both independently indicated might be an appropriate choice, suggesting the need for another, as yet undiscovered particle. Gell-Mann quickly announced that such a new particle must exist, which he called the omega-minus, and he sketched out, using symmetry arguments, what its expected properties should be so that experimentalists could look for it.

Needless to say, in a search with all of the drama of a screenplay, just as the experimenters were ready to give up, they found Gell-Mann’s particle, with precisely the properties he had predicted, including its strangeness, and a mass within 1 percent of his prediction. The eightfold way had not only survived, it had flourished!

The day after the experimental discovery of the omega-minus, at the end of January 1964, a paper by Gell-Mann appeared in the European physics journal Physics Letters. He had decided that his outlandish speculation, and yet another new linguistic gem, would never make it past the strident referees of the U.S. journal Physical Review.

It had not been lost on Gell-Mann, and others as well, that the 3 in SU(3) might have some physical significance. The eight-element multiplets in SU(3) actually could be formed by appropriate combinations of three copies of a smaller representation of the symmetry group, called the fundamental representation, containing three elements. Could it be that these three elements corresponded, in some way, to elementary particles?
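
In the standard group-theoretic notation (a textbook identity, not spelled out in Krauss’s text), combining three copies of the three-element fundamental representation of SU(3) yields precisely the multiplets of the eightfold way, including the ten-member multiplet that contained the omega-minus:

```latex
% Three copies of the fundamental representation of SU(3) combine into
% a decuplet, two octets, and a singlet:
\mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3}
  = \mathbf{10} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{1}
```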

The problem was that if strongly interacting particles like protons were made up of three sub-constituents, then these sub-constituents would generally have to carry fractional electric charges. One of the hallmarks of physics, however, was that all observed particles had electric charges that were integral multiples of the charge on the electron and proton (which had equal and opposite charges). No one knew why this was the case, and to some extent we still don’t know. But that is what nature seemed to require.

Nevertheless, after a year or so of discussion and trepidation, spurred on by the discovery of a wonderful line in James Joyce’s Finnegans Wake—“Three quarks for Muster Mark!”—Gell-Mann wrote a short two-page paper proposing that the eightfold way as a fundamental classification scheme for all strongly interacting particles made mathematical sense if the fundamental constituents of this scheme were three different fractionally charged objects, which he called quarks.

Gell-Mann was wary of proposing the existence of a whole new set of exotic, and potentially ridiculous, particles, and by that time the conventional wisdom in the community had swung toward the idea that fundamental particles themselves might be ill-conceived, and that all elementary particles might be made up of combinations of other elementary particles, in what was called a kind of nuclear democracy. Therefore, Gell-Mann was careful to argue that these objects, which he called up, down, and strange quarks, might be just mathematical niceties that allowed the accounting to be done efficiently.

Remarkably, a former Caltech graduate student of Feynman’s, George Zweig, who was now a postdoctoral researcher at CERN, the European accelerator laboratory, had arrived at a strikingly similar proposal, presented in much more detail, at almost the same time. Moreover, Zweig was much more willing to suggest that these new fractionally charged objects, which he called aces, might be real. When he saw Gell-Mann’s short paper in print, he quickly tried to get his eighty-page paper published by the Physical Review. But Gell-Mann had been wiser, and Zweig was never able to get his work published in that staid journal.

Needless to say, with Gell-Mann’s incredible line of greatest hits, from V-A to the omega-minus, it was inevitable that quarks would win out over aces. That is not to say, however, that the physics community reacted with enthusiasm to Gell-Mann’s proposal. Instead, it greeted the idea with all of the excitement of an unwelcome pregnancy. After all, where were the fractionally charged particles? Searches in everything from accelerator data to the inside of oysters turned up nothing. And thus, even after the New York Times had canonized quarks in its 1967 article, Gell-Mann was quoted as saying the quark was likely to turn out to be merely “a useful mathematical figment.”

So it was in 1967 that Feynman had finally decided to return to his first love, particle physics, to see what interesting problems he could attack. Despite offering complimentary words about Gell-Mann for the Times article, he had not shown much enthusiasm for the work that Gell-Mann had done over the previous five years to drive his field forward. He had been highly skeptical of the omega-minus discovery, and quarks had seemed uninteresting—so uninteresting that when his own former student Zweig had proposed aces, Feynman had shown no enthusiasm whatsoever for that idea either. He found the effort of theorists to seek comfort in the language of group theory too much like a crutch that replaced actual understanding. He described how physicists would repeat themselves using the language of mathematics like “simple baby talk, like boo-boo.”

While we might suspect that Feynman’s reactions were tinged with envy, it is more likely that his natural skepticism was combined with his essential lack of interest in what other theorists were thinking. He had thus far held true to the idea that the strong interaction data was too confusing to allow productive theoretical explanation, and he had avoided all of the failed theoretical fads of the 1960s, including the idea of nuclear democracy and its opposition to fundamental particles. His joy was solving problems, and solving them himself. As he said at the time, following a dictum, “DISREGARD,” that he had given himself after winning the Nobel Prize, “I have only to explain the regularities of nature—I don’t have to explain the methods of my friends.” Nevertheless, he had started once again to teach a course on particle physics, and that meant catching up on the field. For Feynman, that meant catching up on the minutiae of the experimental data.

It turned out to be the right time to do so. A new particle accelerator had come online in Northern California, near Stanford, and thus not too far away from Caltech. This new accelerator was based on a different technique for exploring strongly interacting particles. Instead of smashing these particles together and seeing what happened, the SLAC machine, as it became known, accelerated electrons on a two-mile-long track and smashed them into nuclei. Since electrons don’t feel the strong force, scientists could interpret their collisions more easily, without the uncertainties of the strong interaction. In this way, they hoped to probe the nucleus just as Ernest Rutherford had done nearly sixty years earlier when he discovered the existence of the nucleus by shooting alpha particles at atoms. In the summer of 1968, Feynman decided to visit SLAC during a trip to visit his sister and discover for himself what was happening.

Feynman had already been thinking of how to make sense of the experimental data regarding strongly interacting particles, and I expect that he was influenced by his work on liquid helium. Remember that he had tried to understand how a dense system of atoms and electrons in a liquid could behave at low temperatures as if the atoms were not interacting with each other.

A somewhat similar behavior was suggested by the results of early experiments involving the complex scattering of strongly interacting particles off of each other. In spite of his hesitation to explain the data with some fundamental theory, or maybe because of it, Feynman realized he could explain some general features without recourse to any specific detailed theoretical model. One of the implications of the experimental results was that the collisions mostly took place on the scale of the particles involved, like protons, and not on smaller scales. He reasoned that if the protons had internal constituents, these constituents could not be interacting strongly with each other on smaller scales, or that would have been manifest in the data. Therefore, one could choose to picture strongly interacting particles, or hadrons, as they were called, with a simple toy model: a box full of constituents, which he called partons, that didn’t interact strongly on small scales but were somehow constrained to remain within the hadrons.

The idea was what we call phenomenological—namely, it was just a way to make sense of the data, to see if one could probe for regularities in the morass, in order to get some clues about the underlying physics, just as Feynman’s picture of liquid helium had done. Of course, Feynman was aware of Gell-Mann’s quarks and Zweig’s aces, but he was not trying to produce some grand fundamental understanding of hadrons. Rather, he wanted to understand how to extract useful information from experiments, and so he made no attempt to connect his parton picture to their particles.
