Quantum Man: Richard Feynman’s Life in Science
Lawrence M. Krauss
BECAUSE THE INTELLECTUAL excitement of the possibilities he outlined might not be great enough, Feynman decided in 1960 to personally fund two “Feynman” prizes of $1,000 each. The first would go to “the first guy who can take the information on the page of a book and put it on an area 1/25,000 smaller in linear scale in such manner that it can be read by an electron microscope.” The second would go to “the first guy who makes an operating electric motor—a rotating electric motor which can be controlled from the outside and, not counting the lead-in wires, is only 1/64 inch cube.” (Alas, Feynman was a product of his time, and in spite of the fact that his sister was a physicist, for him, physicists and engineers were guys.)
In spite of his foresight, Feynman was somewhat behind the times. Much to his surprise (and also to his disappointment, because it really didn’t involve any new technology), within a year of the publication of his speech, a gentleman named William McLellan appeared at Feynman’s door with a wooden box and a microscope to view his little motor, and to claim the second prize. Feynman, who hadn’t formally set up the prize structure, nevertheless made good on the $1,000. However, in a letter to McLellan, he added, “I don’t intend to make good on the other one. Since writing the article I’ve gotten married and bought a house!” He needn’t have worried. It took twenty-five years before anyone—in this case, a (male) Stanford University graduate student—successfully followed Feynman’s prescription and claimed his prize. By that time $1,000 was not such a significant amount of money.
IN SPITE OF these prizes, and his fascination with practical machines (Feynman continued to consult periodically for companies like Hughes Aircraft throughout his professional life, even at times when he was devoting the major part of his research efforts to things like strange particles and gravity), the idea that most intrigued Feynman in his 1959 lecture, the one he essentially stated as his reason for considering these problems in the first place, and the only one he really followed up on later professionally, was the possibility of making a faster, smaller, and totally different kind of computer.
Feynman had long been fascinated by computing machines and computing in general (MIT computer scientist Marvin Minsky has said, incredibly, that Feynman once told him that he was always more interested in computing than in physics), an interest that perhaps reached an early peak during his years at Los Alamos, where these activities were vital to the success of the atomic bomb program. He developed totally new algorithms for quickly performing mental estimates of otherwise impenetrably complex quantities and for solving complex differential equations. Recognizing his abilities, even though Feynman was barely out of graduate school, Hans Bethe made him leader of the calculational group, which performed its calculations first with pencil and paper, then with the clunky, hand-operated machines called Marchant calculators, and finally with new electronic computing machines (which, you may recall, Feynman and his team had removed from boxes and put together before the IBM experts arrived to do so). The group calculated everything from the diffusion of neutrons in a bomb, necessary to determine how much material was needed for a critical mass, to simulations of the implosion process vital to the success of a plutonium bomb. He was nothing short of amazing in every aspect, leading Bethe, with whom he would have mental-arithmetic jousting contests, to say he would rather lose any other two physicists than Feynman.
Well before the arrival of the electronic computer, Feynman helped create what might be called the first parallel-processing human computer, presaging the large-scale parallel processors to come. He had already worked his diffusion group into a tightly coordinated team, so that one day when Bethe came in and asked the group to numerically integrate some quantity, Feynman announced, “All right, pencils, calculate!” and everyone flipped their pencils in the air in unison (a trick that he had taught them). This was more than merely play. In the era before the electronic computer, complicated calculations had to be broken up into pieces in order to be performed quickly, because each full computation was too complicated for any one person or any one Marchant calculator. So Feynman organized a large group, comprising mostly wives of the scientists there, each of whom handled a simple part of a complex calculation and then passed it on to the next person down the line.
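The scheme amounts to what we would now call pipeline parallelism. Here is a minimal sketch of the idea, with made-up stages chosen purely for illustration, not Feynman’s actual procedure:

```python
# A minimal sketch of pipeline parallelism, the idea behind the human
# computer: a calculation too complex for one person is split into simple
# stages, and each worker performs one stage and passes the result along.
# The stages below are invented for illustration only.

def stage_square(x):
    return x * x

def stage_add_constant(x):
    return x + 3.0

def stage_halve(x):
    return 0.5 * x

PIPELINE = [stage_square, stage_add_constant, stage_halve]

def run_pipeline(inputs):
    """Push each input through every stage in order. In the human version,
    all stages worked simultaneously on different cards, which is where
    the speedup came from."""
    results = []
    for value in inputs:
        for stage in PIPELINE:
            value = stage(value)
        results.append(value)
    return results

print(run_pipeline([1.0, 2.0, 3.0]))  # [2.0, 3.5, 6.0]
```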
Through his experiences, Feynman became intimately familiar with the detailed workings of a computer—how to break problems down so a computer could solve them (turning the computer into an efficient “file clerk,” as he called it), and even more interestingly, how to determine which problems could be solved in a reasonable amount of time and which couldn’t. All of this came back to him as he began to think about the process of miniaturization. How small could computers be? What challenges lay ahead, and what gains in power usage and in the ability to compute might come if smaller, more complex computers with more elements could be made? As he put it in 1959, comparing his brain to the computers then extant,
The number of elements in this bone box of mine are enormously greater than the number of elements in our “wonderful” computers. But our mechanical computers are too big; the elements in this box are microscopic. I want to make some that are submicroscopic. . . . If we wanted to make a computer that had all these marvelous extra qualitative abilities, we would have to make it, perhaps, the size of the Pentagon. This has several disadvantages. First, it requires too much material; there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing. There is also the problem of heat generation and power consumption. . . . But an even more practical difficulty is that the computer would be limited to a certain speed. Because of its large size, there is finite time required to get the information from one place to another. The information cannot go any faster than the speed of light—so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.
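Feynman’s last point is easy to make quantitative. A rough back-of-the-envelope sketch (the machine sizes below are assumptions chosen for illustration, not figures from the lecture):

```python
# Back-of-the-envelope: how the speed of light caps the clock rate of a
# large computer. A machine cannot complete an end-to-end operation faster
# than a signal can cross it. The sizes are illustrative assumptions.

C = 3.0e8  # speed of light, in meters per second

for name, size_m in [("Pentagon-sized machine", 400.0),
                     ("desktop-sized machine", 0.3),
                     ("chip-sized machine", 0.01)]:
    crossing_time_s = size_m / C         # one-way signal travel time
    max_rate_hz = 1.0 / crossing_time_s  # ceiling on end-to-end cycle rate
    print(f"{name}: signal crossing ~{crossing_time_s:.1e} s, "
          f"so at most ~{max_rate_hz:.1e} operations/s end to end")
```

The 400-meter machine is limited to under a million end-to-end operations per second; shrink it to a centimeter and the same limit rises past ten billion, which is exactly why making computers faster means making them smaller.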
While Feynman outlined in his 1959 lecture the intellectual challenges and opportunities that led to so many future developments, this last question was the only one he seriously returned to in any detail, and in surprising directions that combined a number of the different possibilities he mentioned in his talk. It took him over twenty years to do so, however. The motivation for his return arose in part from his interest in his son, Carl. By the late 1970s Carl had gone to college, to Feynman’s alma mater, MIT, and, happily for Feynman, had switched his area of study from philosophy to computer science. Feynman got interested in thinking more about the field his son was working in. He introduced Carl to MIT professor Marvin Minsky, whom he had met in California, and Minsky introduced Carl to a graduate student living in his basement named Danny Hillis. Hillis had the crazy idea to start a company that would build a giant computer with a million separate processors computing in parallel and communicating with each other through a sophisticated routing system. Carl introduced his father to Hillis—actually, he suggested that Hillis visit his father when Hillis was out in California. Much to Hillis’s surprise, Feynman drove two hours to meet him at the airport to learn more about the project, which he immediately labeled as “kooky,” meaning that he would think about its possibilities and practicalities. This machine, after all, would be the modern electronic version of the human parallel computer he had created at Los Alamos. This, combined with the fact that his son was involved, made the opportunity irresistible.
In fact, when Hillis actually started the company Thinking Machines, Feynman volunteered to spend the summer of 1983 working in Boston (along with Carl), but he refused to give vague general “advice” based on his scientific expertise, calling that “a bunch of baloney,” and demanded something “real to do.” He ultimately derived a solution for how many computer chips each router needed to communicate with in order for a parallel calculation to succeed. What was striking about his solution was that it was formulated not with the traditional techniques of computer science, but with ideas from physics, including thermodynamics and statistical mechanics. And more important, even though he disagreed with the estimates of the other computer engineers at the company, he turned out to be right. (At the same time he showed how their computer could be put to good use solving physics problems whose numerical demands challenged other machines, including problems involved in simulating configurations of elementary particle systems.)
Around this time, in 1981, he also started to think more deeply about the theoretical foundations of computing itself, and he co-taught a course with Caltech colleagues John Hopfield and Carver Mead that covered issues ranging from pattern recognition to computability itself. The former had always fascinated him, and he had come up with some outlandish and, at the time, unworkable proposals for computers that could do it. Pattern recognition is still beyond the capabilities of most computers, which is why, when you log in to some Web sites, to distinguish human users from automated computer viruses or hackers, they present a picture with letters askew and require you to type what you see before you can proceed.
It was this area, the physics of computation, and the related issue, the computation of physics, that ultimately captured Feynman’s attention. He produced a series of scientific papers, as well as a book of lecture notes from the course he taught on the subject beginning in 1983, published posthumously (after some legal wrangling involving his estate).
For a while he was fascinated by the notion of cellular automata, which he discussed at length with a young wunderkind student at Caltech, Stephen Wolfram, who later went on to become famous as the creator of the computer mathematics package Mathematica, which has revolutionized much of the way people do numerical and analytical calculations nowadays. Cellular automata are basically sets of discrete objects arranged on an array, programmed to obey simple rules at each timestep of a computer process, with each object’s new state depending on the states of its nearest neighbors. Even very simple rules can produce incredibly complicated patterns. Feynman was undoubtedly interested in whether the real world might work this way, with very basic, local rules at each spacetime point ultimately producing the complexity seen at larger scales.
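A minimal sketch makes the idea concrete. Below is a one-dimensional “elementary” cellular automaton of the kind Wolfram studied; Rule 30 is one standard example, and the grid size and starting state are chosen arbitrarily for illustration:

```python
# A one-dimensional cellular automaton: each cell is 0 or 1, and at every
# timestep its new state depends only on itself and its two nearest
# neighbors. Rule 30 is a standard example whose simple local rule
# produces a famously complicated pattern.

RULE = 30  # the 8 output bits of the rule, indexed by the 3-cell neighborhood

def step(cells):
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a value 0..7
        new.append((RULE >> neighborhood) & 1)
    return new

cells = [0] * 31
cells[15] = 1  # start from a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running this prints a triangular, chaotic-looking pattern growing from one cell, which is exactly the point: the rule fits in a single byte, yet the output defies any simple description.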
But not surprisingly, his primary attention turned to issues in computing and quantum mechanics. He asked himself how one might need to change the algorithms for a computer to simulate a quantum mechanical system rather than a classical one. After all, the fundamental physical rules were different. The system in question would need to be treated probabilistically, and as he had shown in his reformulation of the quantum world, in order to appropriately follow its time evolution one needed to calculate the probability amplitudes (and not the probabilities) of many different alternative paths at the same time. Once again, quantum mechanics as he had formulated it naturally begged for a computer that could perform different calculations in parallel, combining the results at the end of the calculation.
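The difference between the two kinds of bookkeeping is easy to see in a toy two-path example (the amplitudes below are made up purely for illustration): quantum mechanics adds the complex amplitudes of the alternatives and squares the sum at the end, so paths can interfere, while adding classical probabilities can never produce that interference.

```python
# Toy two-path example. Quantum mechanics adds complex amplitudes for the
# alternative paths and squares the sum; classical reasoning adds the
# probabilities of the paths directly. Amplitudes are illustrative only.

import cmath

amp_path_1 = cmath.rect(0.5, 0.0)        # magnitude 0.5, phase 0
amp_path_2 = cmath.rect(0.5, cmath.pi)   # magnitude 0.5, phase pi

quantum_prob = abs(amp_path_1 + amp_path_2) ** 2              # ~0.0: destructive interference
classical_prob = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2  # 0.5: no interference

print(f"quantum: {quantum_prob:.3f}, classical: {classical_prob:.3f}")
```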
His fascinating ruminations on the subject, contained in a series of papers written between 1981 and 1985, led him in a new direction that hearkened back to his 1959 proposal. Instead of using a classical computer to simulate the quantum mechanical world, could one design a computer with elements so small as to be themselves governed by the rules of quantum mechanics, and if so, how would this change the way a computer could compute?
Feynman’s interest in this question apparently came from his continuing interest in understanding quantum mechanics. One might think that he, if anyone, understood how quantum mechanics worked, but in the 1981 lecture and paper where he first discussed this, he made a confession that reveals more about his rationale for choosing problems to think about—in this case quantum computers—than it does about his own lack of comfort with quantum mechanics:
Might I say immediately, so that you know where I really intend to go, that we always have had (secret, secret, close the doors!)—we always have had a great deal of difficulty in understanding the world view that quantum mechanics represents. At least I do, because I’m an old enough man that I haven’t got to the point that this stuff is obvious to me. Okay, I still get nervous with it. And therefore, some of the younger students . . . you know how it always is, every new idea, it takes a generation or two until it becomes obvious that there’s no real problem. It has not yet become obvious to me that there’s no real problem, but I’m not sure there’s no real problem. So that’s why I like to investigate things. Can I learn anything from asking this question about computers—about this may-or-may-not-be mystery as to what the world view of quantum mechanics is?
To investigate this very question, Feynman considered whether it was possible to exactly simulate quantum mechanical behavior with a classical computing system that operates just with classical probabilities. The answer has to be no. If it were yes, that would be tantamount to saying that the real quantum mechanical world was mathematically equivalent to a classical world in which some quantities are not measured. In such a world one would be able to determine only the probabilistic outcomes of the variables one could measure because one wouldn’t know the value of these “hidden variables.” In this case the probability of any observable event would depend on an unknown, the value of the unobserved quantity. While this imaginary world sounds suspiciously like the world of quantum mechanics (and it was the world Albert Einstein hoped we lived in—namely, a sensible classical world where the weird probabilistic nature of quantum mechanics was due only to our ignorance of the fundamental physical parameters of nature), the quantum world is far more weird than that, as John Bell demonstrated in 1964 in a remarkable paper. Like it or not, a quantum world and a classical world can never be equivalent.
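Bell’s result can be made concrete with the CHSH form of his inequality: in any classical hidden-variable theory, a particular combination S of correlations between two distant measurements must satisfy |S| <= 2, while quantum mechanics predicts values as large as 2√2. A sketch, using the standard singlet-state correlation E(a, b) = -cos(a - b) and the conventional choice of measurement angles (details supplied here for illustration, not taken from the text):

```python
# CHSH illustration of Bell's 1964 result. For two spin-1/2 particles in a
# singlet state, quantum mechanics predicts correlation E(a, b) = -cos(a - b)
# between measurements along directions a and b. Any classical
# hidden-variable theory must satisfy |S| <= 2 for the combination below;
# the quantum prediction reaches 2*sqrt(2).

import math

def E(a, b):
    """Quantum singlet correlation between settings a and b (radians)."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2           # Alice's two measurement settings
b1, b2 = math.pi / 4, -math.pi / 4  # Bob's two measurement settings

S = E(a1, b1) + E(a2, b1) + E(a1, b2) - E(a2, b2)
print(abs(S), 2 * math.sqrt(2))  # both ~2.828, beyond the classical bound of 2
```

No assignment of definite, locally determined outcomes can reproduce a value of |S| above 2, which is the precise sense in which the quantum world can never be mimicked by a classical world with hidden variables.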