The Singularity Is Near: When Humans Transcend Biology
Ray Kurzweil
The exciting property of spintronics is that no energy is required to change an electron’s spin state. Stanford University physics professor Shoucheng Zhang and University of Tokyo professor Naoto Nagaosa put it this way: “We have discovered the equivalent of a new ‘Ohm’s Law’ [the electronics law that states that current in a wire equals voltage divided by resistance]. . . . [It] says that the spin of the electron can be transported without any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor industry, such as gallium arsenide. That’s important because it could enable a new generation of computing devices.”[28]
The potential, then, is to achieve the efficiencies of superconductivity (that is, moving information at or close to the speed of light without any loss of information) at room temperature. It also allows multiple properties of each electron to be used for computing, thereby increasing the potential for memory and computational density.
One form of spintronics is already familiar to computer users: magneto-resistance (a change in electrical resistance caused by a magnetic field) is used to store data on magnetic hard drives. An exciting new form of nonvolatile memory based on spintronics, called MRAM (magnetic random-access memory), is expected to enter the market within a few years. Like hard drives, MRAM retains its data without power, but it has no moving parts and will offer speeds and rewritability comparable to conventional RAM.
MRAM stores information in ferromagnetic metallic alloys, which are suitable for data storage but not for the logical operations of a microprocessor. The holy grail of spintronics would be to achieve practical spintronic effects in a semiconductor, which would enable us to use the technology both for memory and for logic. Today’s chip manufacturing is based on silicon, which does not have the requisite magnetic properties. In March 2004 an international group of scientists reported that doping a blend of silicon and iron with cobalt yields a material that displays the magnetic properties needed for spintronics while still maintaining the crystalline structure silicon requires as a semiconductor.[29]
An important role for spintronics in the future of computer memory is clear, and it is likely to contribute to logic systems as well. The spin of an electron is a quantum property (subject to the laws of quantum mechanics), so perhaps the most important application of spintronics will be in quantum computing systems, using the spin of quantum-entangled electrons to represent qubits, which I discuss below.
Spin has also been used to store information in atomic nuclei, using the complex interaction of their protons’ magnetic moments. Scientists at the University of Oklahoma also demonstrated a “molecular photography” technique for storing 1,024 bits of information in a single liquid-crystal molecule comprising nineteen hydrogen atoms.[30]
Computing with Light.
Another approach to SIMD computing is to use multiple beams of laser light in which information is encoded in each stream of photons. Optical components can then be used to perform logical and arithmetic functions on the encoded information streams. For example, a system developed by Lenslet, a small Israeli company, uses 256 lasers and can perform eight trillion calculations per second by performing the same calculation on each of the 256 streams of data.[31] The system can be used for applications such as performing data compression on 256 video channels.
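The SIMD idea at work here can be sketched in conventional software: one operation applied across 256 data streams at once. The stream count below mirrors the 256 lasers, but the toy “calculation” and the data are illustrative stand-ins, not Lenslet’s actual instruction set.

```python
import numpy as np

# 256 independent data streams, each holding 1,024 samples; the stream count
# mirrors the 256 lasers, the contents are random stand-ins.
streams = np.random.rand(256, 1024)

# SIMD: one and the same operation (a crude quantization, standing in for a
# compression step) is applied to all 256 streams simultaneously.
quantized = np.round(streams * 16) / 16

# Every row was transformed by the identical instruction sequence -- the
# essence of "single instruction, multiple data."
assert quantized.shape == (256, 1024)
```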
SIMD technologies such as DNA computers and optical computers will have important specialized roles to play in the future of computation. The replication of certain aspects of the functionality of the human brain, such as processing sensory data, can use SIMD architectures. For other brain regions, such as those dealing with learning and reasoning, general-purpose computing with its “multiple instruction multiple data” (MIMD) architectures will be required. For high-performance MIMD computing, we will need to apply the three-dimensional molecular-computing paradigms described above.
Quantum Computing.
Quantum computing is an even more radical form of SIMD parallel processing, but one that is in a much earlier stage of development compared to the other new technologies we have discussed. A quantum computer contains a series of qubits, which essentially are zero and one at the same time. The qubit is based on the fundamental ambiguity inherent in quantum mechanics. In a quantum computer, the qubits are represented by a quantum property of particles—for example, the spin state of individual electrons. When the qubits are in an “entangled” state, each one is simultaneously in both states. In a process called “quantum decoherence” the ambiguity of each qubit is resolved, leaving an unambiguous sequence of ones and zeroes. If the quantum computer is set up in the right way, that decohered sequence will represent the solution to a problem. Essentially, only the correct sequence survives the process of decoherence.
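The idea can be made concrete with a small classical simulation, assuming the standard state-vector picture: n qubits are described by 2^n amplitudes, an equal superposition contains every bit pattern at once, and a measurement step (standing in here for decoherence) yields one definite sequence of ones and zeroes. This is a sketch of the concept, not of any physical device.

```python
import numpy as np

n = 3                          # number of simulated qubits
dim = 2 ** n                   # an n-qubit state is described by 2**n amplitudes

# Equal superposition: every one of the 2**n bit patterns is present at once,
# each with amplitude 1/sqrt(2**n).
state = np.full(dim, 1 / np.sqrt(dim))

# "Measurement" (standing in for decoherence here) resolves the ambiguity:
# one definite bit string is drawn with probability |amplitude|**2.
probabilities = np.abs(state) ** 2
outcome = np.random.choice(dim, p=probabilities)

print(format(int(outcome), f"0{n}b"))   # e.g. "101" -- an unambiguous sequence
```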
As with the DNA computer described above, a key to successful quantum computing is a careful statement of the problem, including a precise way to test possible answers. The quantum computer effectively tests every possible combination of values for the qubits. So a quantum computer with one thousand qubits would test 2^1,000 (a number approximately equal to one followed by 301 zeroes) potential solutions simultaneously.
A thousand-bit quantum computer would vastly outperform any conceivable DNA computer, or for that matter any conceivable nonquantum computer. There are two limitations to the process, however. The first is that, like the DNA and optical computers discussed above, only a special set of problems is amenable to being presented to a quantum computer. In essence, we need to be able to test each possible answer in a simple way.
The classic example of a practical use for quantum computing is in factoring very large numbers (finding which smaller numbers, when multiplied together, result in the large number). Factoring numbers with more than 512 bits is currently not achievable on a digital computer, even a massively parallel one.[32]
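Factoring illustrates the “easy to check, hard to find” shape that quantum computing exploits: verifying a proposed pair of factors takes a single multiplication, while finding them by brute force takes time that grows exponentially with the number’s bit length. A minimal sketch with toy-sized numbers (the values are illustrative; a real 512-bit modulus has roughly 155 decimal digits):

```python
def trial_division(n):
    """Find the smallest nontrivial factor of n by brute force."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d            # found a factor the slow way
        d += 1
    return None                 # n is prime

def verify(n, p, q):
    """Checking a proposed answer is a single multiplication -- the easy direction."""
    return p * q == n and 1 < p < n and 1 < q < n

# Toy-sized example: the product of two primes, far smaller than 512 bits.
n = 7919 * 7927
p = trial_division(n)
q = n // p
print(p, q, verify(n, p, q))    # 7919 7927 True
```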
Interesting classes of problems amenable to quantum computing include breaking encryption codes (which rely on factoring large numbers). The other problem is that the computational power of a quantum computer depends on the number of entangled qubits, and the state of the art is currently limited to around ten bits. A ten-bit quantum computer is not very useful, since 2^10 is only 1,024. In a conventional computer, it is a straightforward process to combine memory bits and logic gates. We cannot, however, create a twenty-qubit quantum computer simply by combining two ten-qubit machines. All of the qubits have to be quantum-entangled together, and that has proved to be challenging.
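The arithmetic behind that limitation is worth spelling out: the number of combinations spanned by n mutually entangled qubits grows as 2^n, so twenty entangled qubits are not twice as powerful as ten but roughly a thousand times more so. A rough sketch:

```python
# Combinations spanned by n mutually entangled qubits: 2**n.
ten_qubits = 2 ** 10            # 1,024
twenty_qubits = 2 ** 20         # 1,048,576

# Two un-entangled ten-qubit registers amount, at best, to two parallel
# 1,024-way searches -- about 2,048 candidates, nowhere near 2**20. The
# exponential payoff appears only when all twenty qubits are entangled.
print(twenty_qubits // (2 * ten_qubits))    # 512
```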
A key question is: how difficult is it to add each additional qubit? The computational power of a quantum computer grows exponentially with each added qubit, but if it turns out that adding each additional qubit makes the engineering task exponentially more difficult, we will not be gaining any leverage. (That is, the computational power of a quantum computer will be only linearly proportional to the engineering difficulty.) In general, proposed methods for adding qubits make the resulting systems significantly more delicate and susceptible to premature decoherence.
There are proposals to increase significantly the number of qubits, although these have not yet been proved in practice. For example, Stephan Gulde and his colleagues at the University of Innsbruck have built a quantum computer using a single atom of calcium that has the potential to simultaneously encode dozens of qubits—possibly up to one hundred—using different quantum properties within the atom.[33]
The ultimate role of quantum computing remains unresolved. But even if a quantum computer with hundreds of entangled qubits proves feasible, it will remain a special-purpose device, although one with remarkable capabilities that cannot be emulated in any other way.
When I suggested in The Age of Spiritual Machines that molecular computing would be the sixth major computing paradigm, the idea was still controversial. So much progress has been made in the past five years that there has been a sea change in attitude among experts, and this is now a mainstream view. We already have proofs of concept for all of the major requirements for three-dimensional molecular computing: single-molecule transistors, memory cells based on atoms, nanowires, and methods to self-assemble and self-diagnose the trillions (potentially trillions of trillions) of components.
Contemporary electronics proceeds from the design of detailed chip layouts to photolithography to the manufacturing of chips in large, centralized factories. Nanocircuits are more likely to be created in small chemistry flasks, a development that will be another important step in the decentralization of our industrial infrastructure and will maintain the law of accelerating returns through this century and beyond.
The Computational Capacity of the Human Brain
It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty. . . . Since 1990, the power available to individual AI and robotics programs has doubled yearly, to 30 MIPS by 1994 and 500 MIPS by 1998. Seeds long ago alleged barren are suddenly sprouting. Machines read text, recognize speech, even translate languages. Robots drive cross-country, crawl across Mars, and trundle down office corridors. In 1996 a theorem-proving program called EQP running five weeks on a 50 MIPS computer at Argonne National Laboratory found a proof of a Boolean algebra conjecture by Herbert Robbins that had eluded mathematicians for sixty years. And it is still only Spring. Wait until Summer.
—Hans Moravec, “When Will Computer Hardware Match the Human Brain?” 1997
What is the computational capacity of a human brain? A number of estimates have been made, based on replicating the functionality of brain regions that have been reverse engineered (that is, the methods understood) at human levels of performance. Once we have an estimate of the computational capacity for a particular region, we can extrapolate that capacity to the entire brain by considering what portion of the brain that region represents. These estimates are based on functional simulation, which replicates the overall functionality of a region rather than simulating each neuron and interneuronal connection in that region.
Although we would not want to rely on any single calculation, we find that various assessments of different regions of the brain all provide reasonably close estimates for the entire brain. The following are order-of-magnitude estimates, meaning that we are attempting to determine the appropriate figures to the closest multiple of ten. The fact that different ways of making the same estimate provide similar answers corroborates the approach and indicates that the estimates are in an appropriate range.
The prediction that the Singularity—an expansion of human intelligence by a factor of trillions through merger with its nonbiological form—will occur within the next several decades does not depend on the precision of these calculations. Even if our estimate of the amount of computation required to simulate the human brain was too optimistic (that is, too low) by a factor of even one thousand (which I believe is unlikely), that would delay the Singularity by only about eight years.[34] A factor of one million would mean a delay of only about fifteen years, and a factor of one billion would be a delay of about twenty-one years.[35]
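Those delay figures follow from the exponential growth of price-performance: an estimation error of factor F costs about log2(F) additional doublings of computing power. The sketch below uses a constant one-year doubling time as a conservative illustration (an assumption made here, not the book’s derivation); the smaller figures of 8, 15, and 21 years quoted above reflect the doubling interval itself shrinking over time.

```python
import math

# Doublings of price-performance needed to absorb an estimation error of
# factor F: log2(F). A constant one-year doubling time turns doublings
# directly into years and gives an upper bound on the delay.
for factor in (1e3, 1e6, 1e9):
    doublings = math.log2(factor)
    print(f"error factor {factor:.0e}: about {doublings:.0f} doublings, "
          f"at most ~{doublings:.0f} years at one doubling per year")
```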
Hans Moravec, legendary roboticist at Carnegie Mellon University, has analyzed the transformations performed by the neural image-processing circuitry contained in the retina.[36]
The retina is about two centimeters wide and a half millimeter thick. Most of the retina’s depth is devoted to capturing an image, but one fifth of it is devoted to image processing, which includes distinguishing dark and light, and detecting motion in about one million small regions of the image.
The retina, according to Moravec’s analysis, performs ten million of these edge and motion detections each second. Based on his several decades of experience in creating robotic vision systems, he estimates that the execution of about one hundred computer instructions is required to re-create each such detection at human levels of performance, meaning that replicating the image-processing functionality of this portion of the retina requires 1,000 MIPS. The human brain is about 75,000 times heavier than the 0.02 grams of neurons in this portion of the retina, resulting in an estimate of about 10^14 (100 trillion) instructions per second for the entire brain.[37]
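Because the estimate is pure arithmetic, it can be restated directly from the figures quoted above (order of magnitude only, as emphasized earlier):

```python
# Moravec's retina extrapolation, restated as arithmetic.
detections_per_second = 10_000_000        # edge and motion detections in the retina
instructions_per_detection = 100          # to match each detection in software
retina_ips = detections_per_second * instructions_per_detection   # 1e9, i.e. 1,000 MIPS

brain_to_retina_ratio = 75_000            # whole brain vs. 0.02 grams of retinal neurons
brain_ips = retina_ips * brain_to_retina_ratio

print(f"{brain_ips:.1e} instructions per second")   # 7.5e13, on the order of 10^14
```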
Another estimate comes from the work of Lloyd Watts and his colleagues on creating functional simulations of regions of the human auditory system, which I discuss further in chapter 4.[38]
One of the functions of the software Watts has developed is a task called “stream separation,” which is used in teleconferencing and other applications to achieve telepresence (the localization of each participant in a remote audio teleconference). To accomplish this, Watts explains, means “precisely measuring the time delay between sound sensors that are separated in space and that both receive the sound.” The process involves pitch analysis, spatial position, and speech cues, including language-specific cues. “One of the important cues used by humans for localizing the position of a sound source is the Interaural Time Difference (ITD), that is, the difference in time of arrival of sounds at the two ears.”[39]
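Watts’s software itself is not described in code here, but the ITD cue can be illustrated with a textbook cross-correlation estimate: find the lag at which the two ear signals align best and convert that lag to a time difference. The sample rate, delay, and synthetic signals below are invented for illustration.

```python
import numpy as np

fs = 48_000                     # sample rate in Hz (chosen for illustration)
true_delay = 20                 # samples by which the right-ear signal lags the left

# Synthetic "sound": noise at the left ear, the same noise delayed at the right.
rng = np.random.default_rng(0)
left = rng.standard_normal(4_096)
right = np.concatenate([np.zeros(true_delay), left[:-true_delay]])

# Cross-correlate the two ear signals and locate the peak: the lag of best
# alignment, converted to seconds, is the interaural time difference.
corr = np.correlate(right, left, mode="full")
lag = corr.argmax() - (len(left) - 1)     # in samples; positive means right lags left
itd = lag / fs

print(f"estimated ITD: {itd * 1e6:.0f} microseconds")   # about 417 here
```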