Quantum Man: Richard Feynman's Life in Science
Author: Lawrence M. Krauss
Tags: #Science / Physics
Feynman recognized the limitations of his picture, and its distinction from normal model building. As he said in his first paper on the subject, “These suggestions arose in theoretical studies from several directions and do not represent the result of consideration of any one model. They are an extraction of those features which relativity and quantum mechanics and some empirical facts imply almost independently of any model.”
In any case, Feynman’s picture allowed him to consider a process most physicists, who were trying to explain the data with some fundamental model, had avoided. These others had focused on the simplest of all possibilities, where two particles entered a collision volume and two particles exited the region. Feynman, however, realized his simple picture would allow him to explore more complicated processes. In these processes, if experimenters banged hadrons together head-on with enough energy so that a lot of particles were produced, they could hope to measure the detailed energies and momenta of at most a few of the outgoing particles. One might think that in this case they would not get much useful information. But Feynman argued, motivated by his parton picture, that these processes, which he called *inclusive processes*, might actually be worth thinking about.
He realized that at very high energies, the effects of relativity would cause each particle, in the frame of the other particle, to look like a pancake, because lengths along the direction of motion are contracted. Moreover, the effects of time dilation would mean the sideways motions of individual partons around the pancake would appear to be slowed to a standstill. Thus, each hadron would look, to the other hadrons, like a collection of pointlike particles at rest inside a pancake. Then, assuming that the subsequent collision would involve one of the partons from each pancake colliding, with the rest simply passing through one another, physicists could make sense of inclusive processes in which only one outgoing particle in the collision is measured in detail and the rest fly off with only general features of their distribution recorded. Feynman suggested that if this picture of the collision was correct, certain measured quantities, like the momentum of the outgoing particle measured in the direction of the incident beam, should have a simple distribution.
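Feynman's "simple distribution" can be sketched in modern notation (a standard rendering of what is now called Feynman scaling, not anything quoted from his paper): if an outgoing particle carries longitudinal momentum $p_L$ out of a kinematic maximum $p_{L,\max}$, then at very high collision energy $s$ the inclusive distribution is conjectured to depend on the energy only through the ratio $x_F$,

```latex
x_F \equiv \frac{p_L}{p_{L,\max}}, \qquad
E\,\frac{d^3\sigma}{d^3p} \;\xrightarrow{\;s\to\infty\;}\; f\!\left(x_F,\, p_T\right),
```

that is, the distribution becomes a fixed function of $x_F$ and the transverse momentum $p_T$ alone, no matter how high the beam energy.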
Louis Pasteur is reputed to have said, “Fortune favors the prepared mind.” Feynman’s mind was well prepared when he visited SLAC in 1968. The experimentalists there had been analyzing their data, the first data on high-energy electrons scattering on proton targets, producing a huge spray of outgoing stuff, according to the suggestion of a young theorist there, James Bjorken, known universally as “Bj.” Bjorken is a determined, mild-mannered, brilliant theorist who often speaks in a language that is unfamiliar, but whose conclusions are worth listening to. So it was at SLAC at the time.
Using detailed ideas from field theory, many of which originated with Gell-Mann, Bjorken had shown in 1967 that if experimenters measured merely the properties of the outgoing electrons in these collisions, they would find regularities in their distribution that would be very different if the proton was composed of pointlike constituents than if it wasn’t. He called these regularities *scaling properties*.
While the experimentalists involved in the SLAC experiments didn’t really understand the detailed theoretical justification for Bjorken’s scaling hypothesis, his suggestions did provide one useful way to analyze their data, so they did. And lo and behold, the data agreed with his predictions. Such agreement, however, did not guarantee that Bjorken’s somewhat obscure suggestion was correct. Perhaps other mechanisms could produce the same effects.
When Feynman visited SLAC, Bjorken was out of town, and Feynman talked directly to the experimentalists, who, needless to say, gave him a much better feel for the results themselves than for why or how Bjorken had derived his predictions. Having already thought about the more complicated hadron-hadron collisions, Feynman realized that the electron-proton collisions might be easier to analyze, and that the observed scaling might have a simple physical explanation in terms of partons.
That evening, he had an epiphany after going to a topless bar for motivation (there remains some dispute about this), and back in his hotel room he was able to demonstrate that the scaling behavior indeed had a simple explanation: in the reference frame in which the proton looked like a pancake to the incoming electrons, if the electrons bounced off individual partons, each of which was essentially independent, then the scaling function that Bjorken had derived could be understood as simply the probability of finding a parton of a given momentum inside of the proton, weighted by the square of the electric charge on that parton.
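The relation Feynman arrived at that evening can be sketched in modern notation (a standard textbook form, with symbols that are not taken from the text): if a parton of species $i$, carrying electric charge $e_i$ in units of the proton charge, is found with momentum fraction $x$ of the proton with probability density $f_i(x)$, then Bjorken's scaling function becomes

```latex
F_2(x) \;=\; \sum_i e_i^{2}\, x\, f_i(x),
```

a function of $x$ alone, independent of the collision energy: just the charge-squared-weighted probability of finding a parton with a given share of the proton's momentum, exactly as described above.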
This was an explanation the experimentalists could understand, and when Bjorken returned from mountain climbing to SLAC, Feynman was still there and he sought out Bjorken to ask him a host of questions about what he knew and didn’t know. Bjorken most vividly remembers the language Feynman used, and how different it was from the way he had thought about things. As he later put it, “It was an easy, seductive language that everyone could understand. It took no time at all for the parton model bandwagon to get rolling.”
Needless to say, Feynman was both satisfied and thrilled by the ability of his simple picture to explain the new data. He and Bjorken also realized that other probes of protons could be used to obtain complementary information on the structure of protons by using incident particles that interact not electromagnetically with partons, but via the weak interaction—namely, neutrinos. Feynman was once again at the center of activity in the field, and by the time he published his first paper on the idea, several years after the fact, the analysis of *deep inelastic scattering*, as it had come to be called, was where all of the action was being focused.
Of course, the central questions then became, Were partons real? and, if so, Were partons quarks? Feynman recognized that the first question was difficult to answer completely, given the utter simplicity of his model and the likelihood that the actual physical phenomena might be more complicated. Years later, in a book on the subject, he stated his concerns explicitly: “It should be noted that even if our house of cards survives and proves to be right we have not thereby proved the existence of partons. . . . Partons would have been a useful psychological guide as to what relations to expect—and if they continue to serve this way to produce other valid expectations they would of course begin to become ‘real,’ possibly as real as any other theoretical structure invented to describe nature.”
As for the second question, that was doubly difficult. First, it took some time, in this climate where the general theoretical bias was against fundamental particles, before people were willing to seriously consider it, and second, even if the partons did represent quarks, why weren’t they knocked free, for all to see, emerging from the high-energy collisions?
Over time, however, using the formalism that Feynman had developed, physicists were able to extract the properties of the partons from the data, and lo and behold, the fractional charges on these objects became manifest. By the early 1970s Feynman had become convinced that the partons had all of the properties of Gell-Mann’s hypothetical quarks (and Zweig’s aces), though he continued to talk in parton language (perhaps to annoy Gell-Mann). Gell-Mann, for his part, deflected criticism that he had not been willing to believe in the reality of quarks by making fun of Feynman’s simplified picture. Ultimately, because quarks came from a fundamental model, the physics world moved during the 1970s from the parton picture of protons to the quark picture.
But where were the quarks to be found? Why were they hiding inside of protons, and not found lurking anywhere else? And why were they behaving like free particles inside the proton when the strong interaction that governed the collisions of protons with each other, and hence quarks with each other, was the strongest force known in nature?
Remarkably, within a period of five years, not only were these questions about the strong force essentially answered, but theorists had also developed a fundamental understanding of the nature of the weak force as well. A decade after the mess had begun, three of the four known forces in nature were essentially understood. Perhaps the most significant, and still probably one of the least publicly heralded, theoretical revolutions in the history of our fundamental understanding of nature had been largely completed. The experimentalists at SLAC who had discovered scaling, and hence quarks, won the Nobel Prize in 1990, and the theorists who developed our current “standard model” of the weak and strong forces won Nobel Prizes in 1979, 1997, and 2004.
Remarkably, Feynman’s work, both during this period and during the previous five years, largely and directly helped make this revolution possible. In the process, without aiming to, and perhaps without his ever fully appreciating the consequences, Feynman’s work contributed to a new understanding of the very nature of scientific truth. This in turn implied that his own work on QED was not a kluge but provided a fundamental new physical understanding of why sensible theories of nature on scales we can measure produce finite results.
The story of how all this happened begins, coincidentally, with the work Gell-Mann did with his colleague Francis Low in Illinois in 1953–54. Their paper, which had impressed Feynman when Gell-Mann first visited Caltech, concluded that the effective magnitude of the electric charge on the electron would vary with size, getting larger as one moves closer in and penetrates the cloud of virtual electron-positron pairs that were shielding the charge.
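The Gell-Mann–Low result can be summarized (in its one-loop form, a standard modern rendering rather than their original notation) by an effective charge that grows at large momentum transfer $q$, i.e., at short distances:

```latex
\alpha_{\mathrm{eff}}(q^2) \;\approx\; \frac{\alpha}{\,1 - \dfrac{\alpha}{3\pi}\,\ln\!\left(\dfrac{q^2}{m_e^2}\right)},
```

so probing closer to the electron penetrates more of the screening cloud of virtual electron-positron pairs and reveals a larger charge, just as described above.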
A little bit farther east, in the summer of 1954 Frank Yang and his office mate Robert Mills, at Brookhaven Laboratory on Long Island, motivated by the success of QED as an explanation of nature, published a paper in which they postulated a possible generalization of the theory that they thought might be appropriate for understanding strong nuclear forces.
In QED the electromagnetic force is propagated by the exchange of massless particles, photons. The form of the equations for the electromagnetic interaction is strongly restricted by a symmetry called gauge symmetry, which essentially ensures that the photon is massless, and the interaction is therefore long range, as I have previously described. Note that in electromagnetism, photons couple to electric charges, and photons themselves are electrically neutral.
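Concretely (a standard textbook statement, not a quotation from the text): gauge symmetry means the physics is unchanged when the electromagnetic potential is shifted by the gradient of any function $\lambda(x)$,

```latex
A_\mu \;\to\; A_\mu + \partial_\mu \lambda(x),
```

and a photon mass term of the form $m^2 A_\mu A^\mu$ is not invariant under this shift. That is why the symmetry forces the photon to be exactly massless, and hence the electromagnetic force to be long range.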
Yang and Mills suggested a more complicated version of gauge invariance in which many different types of “photons” could be exchanged between many types of “charges,” and some of the photons could themselves be charged, which means they would interact with themselves and other photons. The symmetries of these new Yang-Mills equations, as they became known, were both fascinating and suggestive. The strong force didn’t seem to distinguish between protons and neutrons, for example, so inventing a symmetry between them, as well as a charged photon-like particle that could somehow couple to and convert one into the other, made some physical sense. Moreover, the success in removing infinities in QED depended crucially on the gauge symmetry of that theory, so using it as a basis for the new theory made sense.
The problem was that the gauge symmetry of the new equations would in general require the new photons to be massless, but because the strong interactions are short range, operating only on nuclear scales, in practice the new photons would have to be massive. They had no notion of how exactly this might happen, so their paper was not really a model so much as an idea.
In spite of these problems, aficionados, like Julian Schwinger and Murray Gell-Mann, continued to return to the idea of Yang-Mills theories during the 1950s and 1960s because they felt their mathematical structure might provide the key to understanding either the weak or the strong force, or both. Interestingly, the group symmetries of Yang-Mills theory could be expressed using the same kind of group theory language that Gell-Mann later used as a classification scheme for the strongly interacting particles.
Schwinger assigned his graduate student Sheldon Glashow the task of thinking about what kind of group structure and what kind of Yang-Mills theory might describe the symmetries associated with the weak interaction. In 1961 Glashow not only found a candidate symmetry, but also showed rather remarkably that it could be combined with the gauge symmetry in QED to produce a model in which both the weak interaction and the electromagnetic interaction arose from the same set of gauge symmetries, and that in this model the photon of QED would be accompanied by three other *gauge bosons*, as the new type of photons became known. The problem was that, once again, the weak interaction was short range while electromagnetism was long range, and Glashow didn’t explain how this difference could be accommodated. The moment one gave masses to the new particles, the gauge symmetry, and with it the beauty and potential mathematical consistency, of the model would disappear.
Part of the problem was that no one really knew how to convert Yang-Mills theories into fully consistent quantum field theories like QED. The mathematics was more cumbersome, and the motivation wasn’t there to embark on such a task. Enter Richard Feynman. When he first started working on gravity as a quantum theory, the mathematical problems were so difficult that he turned to Gell-Mann for advice. Gell-Mann suggested that he first solve a simpler problem. He told Feynman about Yang-Mills theories and argued that the symmetries inherent in these theories were very similar to, but less intimidating than, those associated with the theory of general relativity.
Feynman took Gell-Mann’s advice and analyzed the quantum properties of Yang-Mills theories and made a number of seminal discoveries, which he wrote up in detail only years later. In particular, he discovered that to get consistent Feynman rules for the quantum theory, one had to add a fictional particle to internal loops to make the probabilities work out correctly. Later two Russian physicists, Ludvig Faddeev and Victor Popov, rediscovered this, and the particles are now called Faddeev-Popov ghost bosons. Moreover, Feynman also discovered a new general theorem about Feynman diagrams in quantum field theories, relating diagrams with internal virtual particle loops to those without such loops.