The Singularity Is Near: When Humans Transcend Biology

Ray Kurzweil

Anderson’s concern, however, does not reflect the scope of the broad and painstaking effort by tens of thousands of brain and computer scientists to methodically test out the limits and capabilities of models and simulations before taking them to the next step. We are not attempting to disassemble and reconfigure the brain’s trillions of parts without a detailed analysis at each stage. The process of understanding the principles of operation of the brain is proceeding through a series of increasingly sophisticated models derived from increasingly accurate and high-resolution data.

As the computational power to emulate the human brain approaches—we’re almost there with supercomputers—the efforts to scan and sense the human brain and to build working models and simulations of it are accelerating. As with every other projection in this book, it is critical to understand the exponential nature of progress in this field. I frequently encounter colleagues who argue that it will be a century or longer before we can understand in detail the methods of the brain. As with so many long-term scientific projections, this one is based on a linear view of the future and ignores the inherent acceleration of progress, as well as the exponential growth of each underlying technology. Such overly conservative views are also frequently based on an underestimation of the breadth of contemporary accomplishments, even by practitioners in the field.

Scanning and sensing tools are doubling their overall spatial and temporal resolution each year. Scanning bandwidth, price-performance, and image-reconstruction times are also seeing comparable exponential growth. These trends hold true for all of the forms of scanning: fully noninvasive scanning, in vivo scanning with an exposed skull, and destructive scanning. Databases of brain-scanning information and model building are also doubling in size about once per year.
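To make the doubling arithmetic concrete, here is a minimal Python sketch. The starting and target voxel sizes are invented example values, not figures from the text; the point is only that annual doubling closes a hundredfold resolution gap in under seven years.

```python
import math

# Illustrative only: start_um and target_um are assumed example values,
# not measured figures from the text.
start_um = 100.0   # assumed current voxel size, in micrometers
target_um = 1.0    # assumed target, roughly the scale of a synapse

# If spatial resolution doubles each year, the voxel size halves each
# year, so the number of doublings needed is log2(start / target).
years = math.log2(start_um / target_um)
print(f"~{years:.1f} years of annual doubling: {start_um:g} um -> {target_um:g} um")
# prints: ~6.6 years of annual doubling: 100 um -> 1 um
```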

We have demonstrated that our ability to build detailed models and working simulations of subcellular portions, neurons, and extensive neural regions follows closely upon the availability of the requisite tools and data. The performance of neurons and subcellular portions of neurons often involves substantial complexity and numerous nonlinearities, but the performance of neural clusters and neuronal regions is often simpler than that of their constituent parts. We have increasingly powerful mathematical tools, implemented in effective computer software, that are able to accurately model these types of complex hierarchical, adaptive, semirandom, self-organizing, highly nonlinear systems. Our success to date in modeling several important regions of the brain demonstrates the effectiveness of this approach.

The generation of scanning tools now emerging will for the first time provide spatial and temporal resolution capable of observing in real time the performance of individual dendrites, spines, and synapses. These tools will quickly lead to a new generation of higher-resolution models and simulations.

Once the nanobot era arrives in the 2020s, we will be able to observe all of the relevant features of neural performance with very high resolution from inside the brain itself. Sending billions of nanobots through its capillaries will enable us to noninvasively scan an entire working brain in real time. We have already created effective (although still incomplete) models of extensive regions of the brain with today’s relatively crude tools. Within twenty years, we will have at least a millionfold increase in computational power and vastly improved scanning resolution and bandwidth. So we can have confidence that we will have the data-gathering and computational tools needed by the 2020s to model and simulate the entire brain, which will make it possible to combine the principles of operation of human intelligence with the forms of intelligent information processing that we have derived from other AI research. We will also benefit from the inherent strength of machines in storing, retrieving, and quickly sharing massive amounts of information. We will then be in a position to implement these powerful hybrid systems on computational platforms that greatly exceed the capabilities of the human brain’s relatively fixed architecture.
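As a quick check of the millionfold figure, under the roughly one-doubling-per-year growth assumed throughout the book, twenty years means twenty doublings:

```python
# Twenty annual doublings of computational price-performance,
# per the book's exponential-growth premise.
doublings = 20
print(f"2**{doublings} = {2**doublings:,}")  # 1,048,576: about a millionfold
```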

The Scalability of Human Intelligence
In response to Hofstadter’s concern as to whether human intelligence is just above or below the threshold necessary for “self-understanding,” the accelerating pace of brain reverse engineering makes it clear that there are no limits to our ability to understand ourselves—or anything else, for that matter. The key to the scalability of human intelligence is our ability to build models of reality in our mind. These models can be recursive, meaning that one model can include other models, which can include yet finer models, without limit. For example, a model of a biological cell can include models of the nucleus, ribosomes, and other cellular systems. In turn, the model of the ribosome may include models of its submolecular components, and then down to the atoms and subatomic particles and forces that it comprises.
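This recursion is easy to render as a data structure. The sketch below is illustrative only; the class and the submodel names (rRNA and so on) are invented for the example, not drawn from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A model that may contain finer-grained submodels, recursively."""
    name: str
    submodels: list["Model"] = field(default_factory=list)

    def depth(self) -> int:
        # A model with no submodels has depth 1; otherwise one more
        # than its deepest submodel.
        return 1 + max((m.depth() for m in self.submodels), default=0)

# The cell example from the text, nested five levels deep:
cell = Model("cell", [
    Model("nucleus"),
    Model("ribosome", [
        Model("rRNA", [
            Model("nucleotide", [Model("atom")]),
        ]),
    ]),
])
print(cell.depth())  # 5
```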

Our ability to understand complex systems is not necessarily hierarchical. A complex system like a cell or the human brain cannot be understood simply by breaking it down into constituent subsystems and their components. We have increasingly sophisticated mathematical tools for understanding systems that combine both order and chaos—and there is plenty of both in a cell and in the brain—and for understanding the complex interactions that defy logical breakdown.

Our computers, which are themselves accelerating, have been a critical tool in enabling us to handle increasingly complex models, which we would otherwise be unable to envision with our brains alone. Clearly, Hofstadter’s concern would be correct if we were limited just to models that we could keep in our minds without technology to assist us. That our intelligence is just above the threshold necessary to understand itself results from our native ability, combined with the tools of our own making, to envision, refine, extend, and alter abstract—and increasingly subtle—models of our own observations.

Uploading the Human Brain

 

To become a figment of your computer’s imagination.

                   —David Victor de Transend, Godling’s Glossary, definition of “upload”

 

A more controversial application than the scanning-the-brain-to-understand-it scenario is scanning the brain to upload it. Uploading a human brain means scanning all of its salient details and then reinstantiating those details into a suitably powerful computational substrate. This process would capture a person’s entire personality, memory, skills, and history.

If we are truly capturing a particular person’s mental processes, then the reinstantiated mind will need a body, since so much of our thinking is directed toward physical needs and desires. As I will discuss in chapter 5, by the time we have the tools to capture and re-create a human brain with all of its subtleties, we will have plenty of options for twenty-first-century bodies for both nonbiological humans and biological humans who avail themselves of extensions to our intelligence. The human body version 2.0 will include virtual bodies in completely realistic virtual environments, nanotechnology-based physical bodies, and more.

In chapter 3 I discussed my estimates for the memory and computational requirements to simulate the human brain. Although I estimated that 10^16 cps of computation and 10^13 bits of memory are sufficient to emulate human levels of intelligence, my estimates for the requirements of uploading were higher: 10^19 cps and 10^18 bits, respectively. The reason for the higher estimates is that the lower ones are based on the requirements to re-create regions of the brain at human levels of performance, whereas the higher ones are based on capturing the salient details of each of our approximately 10^11 neurons and 10^14 interneuronal connections. Once uploading is feasible, we are likely to find that hybrid solutions are adequate. For example, we will probably find that it is sufficient to simulate certain basic support functions such as the signal processing of sensory data on a functional basis (by plugging in standard modules) and reserve the capture of subneuron details only for those regions that are truly responsible for individual personality and skills. Nonetheless, we will use our higher estimates for this discussion.
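Putting those orders of magnitude side by side shows what the higher estimates imply per connection. This is back-of-envelope arithmetic on the figures just quoted, not an additional claim:

```python
# The estimates quoted above, as orders of magnitude.
functional_cps, functional_bits = 1e16, 1e13   # functional simulation
upload_cps, upload_bits = 1e19, 1e18           # uploading
neurons, connections = 1e11, 1e14

print(f"compute ratio: {upload_cps / functional_cps:,.0f}x")    # 1,000x
print(f"memory ratio:  {upload_bits / functional_bits:,.0f}x")  # 100,000x
print(f"per connection: {upload_cps / connections:,.0f} cps, "
      f"{upload_bits / connections:,.0f} bits")  # 100,000 cps, 10,000 bits
```

On these figures, uploading budgets roughly 10^5 cps and 10^4 bits for each interneuronal connection.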

The basic computational resources (10^19 cps and 10^18 bits) will be available for one thousand dollars in the early 2030s, about a decade later than the resources needed for functional simulation. The scanning requirements for uploading are also more daunting than for “merely” re-creating the overall powers of human intelligence. In theory one could upload a human brain by capturing all the necessary details without necessarily comprehending the brain’s overall plan. In practice, however, this is unlikely to work. Understanding the principles of operation of the human brain will reveal which details are essential and which details are intended to be disordered. We need to know, for example, which molecules in the neurotransmitters are critical, and whether we need to capture overall levels, position and location, and/or molecular shape. As I discussed above, we are just learning, for example, that it is the position of actin molecules and the shape of CPEB molecules in the synapse that are key for memory. It will not be possible to confirm which details are crucial without having confirmed our understanding of the theory of operation. That confirmation will be in the form of a functional simulation of human intelligence that passes the Turing test, which I believe will take place by 2029.[119]
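The “about a decade later” spacing also falls out of the growth assumption: the upload estimate calls for a thousandfold more computation, and a thousandfold is roughly ten doublings. A sketch, assuming one price-performance doubling per year:

```python
import math

functional_cps = 1e16   # estimate for functional simulation (above)
upload_cps = 1e19       # estimate for uploading (above)

# At one price-performance doubling per year, the lag between the two
# $1,000 milestones is the number of doublings separating them.
lag_years = math.log2(upload_cps / functional_cps)
print(f"~{lag_years:.0f} years")  # ~10 years
```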

To capture this level of detail will require scanning from within the brain using nanobots, the technology for which will be available by the late 2020s. Thus, the early 2030s is a reasonable time frame for the computational performance, memory, and brain-scanning prerequisites of uploading. Like any other technology, it will take some iterative refinement to perfect this capability, so the end of the 2030s is a conservative projection for successful uploading.

We should point out that a person’s personality and skills do not reside only in the brain, although that is their principal location. Our nervous system extends throughout the body, and the endocrine (hormonal) system has an influence, as well. The vast majority of the complexity, however, resides in the brain, which is the location of the bulk of the nervous system. The bandwidth of information from the endocrine system is quite low, because the determining factor is overall levels of hormones, not the precise location of each hormone molecule.

Confirmation of the uploading milestone will be in the form of a “Ray Kurzweil” or “Jane Smith” Turing test, in other words, convincing a human judge that the uploaded re-creation is indistinguishable from the original specific person. By that time we’ll face some complications in devising the rules of any Turing test. Since nonbiological intelligence will have passed the original Turing test years earlier (around 2029), should we allow a nonbiological human equivalent to be a judge? How about an enhanced human? Unenhanced humans may become increasingly hard to find. In any event, it will be a slippery slope to define enhancement, as many different levels of extending biological intelligence will be available by the time we have purported uploads. Another issue will be that the humans we seek to upload will not be limited to their biological intelligence. However, uploading the nonbiological portion of intelligence will be relatively straightforward, since the ease of copying computer intelligence has always represented one of the strengths of computers.

One question that arises is, How quickly do we need to scan a person’s nervous system? It clearly cannot be done instantaneously, and even if we did provide a nanobot for each neuron, it would take time to gather the data. One might therefore object that because a person’s state is changing during the data-gathering process, the upload information does not accurately reflect that person at an instant in time but rather over a period of time, even if only a fraction of a second.[120] Consider, however, that this issue will not interfere with an upload’s passing a “Jane Smith” Turing test. When we encounter one another on a day-to-day basis, we are recognized as ourselves even though it may have been days or weeks since the last such encounter. If an upload is sufficiently accurate to re-create a person’s state within the amount of natural change that a person undergoes in a fraction of a second or even a few minutes, that will be sufficient for any conceivable purpose. Some observers have interpreted Roger Penrose’s theory of the link between quantum computing and consciousness (see chapter 9) to mean that uploading is impossible because a person’s “quantum state” will have changed many times during the scanning period. But I would point out that my quantum state has changed many times in the time it took me to write this sentence, and I still consider myself to be the same person (and no one seems to be objecting).

Nobel Prize winner Gerald Edelman points out that there is a difference between a capability and a description of that capability. A photograph of a person is different from the person herself, even if the “photograph” is very high resolution and three-dimensional. However, the concept of uploading goes beyond the extremely high-resolution scan, which we can consider the “photograph” in Edelman’s analogy. The scan does need to capture all of the salient details, but it also needs to be instantiated into a working computational medium that has the capabilities of the original (albeit that the new nonbiological platforms are certain to be far more capable). The neural details need to interact with one another (and with the outside world) in the same ways that they do in the original. A comparable analogy is that between a computer program that resides on a computer disk (a static picture) and a program that is actively running on a suitable computer (a dynamic, interacting entity). Both the data capture and the reinstantiation of a dynamic entity constitute the uploading scenario.
