IS THE BRAIN BIG ENOUGH?
Is our conception of human neuron functioning and our estimates of the number of neurons and connections in the human brain consistent with what we know about the brain’s capabilities? Perhaps human neurons are far more capable than we think they are. If so, building a machine with human-level capabilities might take longer than expected.
We find that estimates of the number of concepts (“chunks” of knowledge) that a human expert in a particular field has mastered are remarkably consistent: about 50,000 to 100,000. This approximate range appears to hold over a wide range of human endeavors: the number of board positions mastered by a chess grandmaster, the concepts mastered by a technical expert such as a physician, the vocabulary of a writer (Shakespeare used 29,000 words;[19] this book uses a lot fewer).
This type of professional knowledge is, of course, only a small subset of the knowledge we need to function as human beings. Basic knowledge of the world, including so-called common sense, is more extensive. We also have an ability to recognize patterns: spoken language, written language, objects, faces. And we have our skills: walking, talking, catching balls. I believe that a reasonably conservative estimate of the general knowledge of a typical human is a thousand times greater than the knowledge of an expert in her professional field. This gives us a rough estimate of 100 million chunks (bits of understanding, concepts, patterns, specific skills) per human. As we will see below, even if this estimate is low (by a factor of up to a thousand), the brain is still big enough.
The number of neurons in the human brain is estimated at approximately 100 billion, with an average of 1,000 connections per neuron, for a total of 100 trillion connections. With 100 trillion connections and 100 million chunks of knowledge (including patterns and skills), we get an estimate of about a million connections per chunk.
Our computer simulations of neural nets use a variety of neuron models, all of which are relatively simple. Efforts to build detailed electronic models of real mammalian neurons suggest that while animal neurons are more complicated than typical computer models, the difference in complexity is modest. Even using our simpler computer versions of neurons, we find that we can model a chunk of knowledge (a face, a character shape, a phoneme, a word sense) using as few as a thousand connections per chunk. Thus our rough estimate of a million neural connections in the human brain per human knowledge chunk appears reasonable.
Indeed, it appears ample. We could make my estimate of the number of knowledge chunks a thousand times greater, and the calculation would still work. It is likely, however, that the brain’s encoding of knowledge is less efficient than the methods we use in our machines. This apparent inefficiency is consistent with our understanding that the human brain is conservatively designed. The brain relies on a large degree of redundancy and a relatively low density of information storage to gain reliability and to continue to function effectively despite a high rate of neuron loss as we age. My conclusion is that we do not need to contemplate a model of information processing in individual neurons that is significantly more complex than the one we currently understand in order to explain human capability. The brain is big enough.
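To make the arithmetic behind this conclusion explicit, here is a minimal back-of-the-envelope calculation in Python. Every figure in it is one of the rough estimates quoted above, not a measurement; the point is only that the ratios work out with room to spare.

    # Back-of-the-envelope figures from the text (rough estimates, not measurements).
    neurons = 100e9               # ~100 billion neurons in the human brain
    connections_per_neuron = 1e3  # ~1,000 connections per neuron, on average
    chunks = 100e6                # ~100 million chunks of knowledge per person

    total_connections = neurons * connections_per_neuron    # 100 trillion
    per_chunk = total_connections / chunks                   # ~1 million connections per chunk
    print(f"{total_connections:.0e} connections, {per_chunk:.0e} per chunk")

    # Even if the chunk estimate were a thousand times too low (100 billion chunks),
    # each chunk would still get about 1,000 connections -- roughly what our simple
    # computer neuron models need to represent a face, a phoneme, or a word sense.
    print(f"{total_connections / (chunks * 1000):.0e} connections per chunk, worst case")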
But we don’t need to simulate the entire evolution of the human brain in order to tap the intricate secrets it contains. Just as a technology company will take apart and “reverse engineer” (analyze to understand the methods of) a rival’s products, we can do the same with the human brain. It is, after all, the best example we can get our hands on of an intelligent process. We can tap the architecture, organization, and innate knowledge of the human brain in order to greatly accelerate our understanding of how to design intelligence in a machine. By probing the brain’s circuits, we can copy and imitate a proven design, one that took its original designer several billion years to develop. (And it’s not even copyrighted.)
As we approach the computational ability to simulate the human brain—we’re not there today, but we will begin to be in about a decade’s time—such an effort will be intensely pursued. Indeed, this endeavor has already begun.
For example, Synaptics’ vision chip is fundamentally a copy, implemented in silicon of course, of the neural organization of not only the human retina but also the early stages of mammalian visual processing. It has captured the essence of the algorithm of early mammalian visual processing, called center-surround filtering. It is not a particularly complicated chip, yet it realistically captures the essence of the initial stages of human vision.
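The Synaptics chip itself is an analog silicon circuit, and the sketch below is not its actual design; it is simply my own software illustration of the center-surround idea. Each output pixel is the response of a small excitatory center minus a larger inhibitory surround, which emphasizes edges and contrast while ignoring uniform illumination.

    import numpy as np

    def center_surround(image, center=1, surround=2):
        """Toy center-surround filter: excitatory center minus inhibitory surround.

        `image` is a 2-D array of intensities. For each pixel we take the mean of a
        small central neighborhood and subtract the mean of a larger surrounding
        neighborhood (which includes the center), a rough software analogue of what
        retinal ganglion cells are thought to compute.
        """
        h, w = image.shape
        out = np.zeros_like(image, dtype=float)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - center), min(h, y + center + 1)
                x0, x1 = max(0, x - center), min(w, x + center + 1)
                Y0, Y1 = max(0, y - surround), min(h, y + surround + 1)
                X0, X1 = max(0, x - surround), min(w, x + surround + 1)
                out[y, x] = image[y0:y1, x0:x1].mean() - image[Y0:Y1, X0:X1].mean()
        return out

    # A uniform field produces essentially no response; a light-dark boundary produces a strong one.
    flat = np.ones((9, 9))
    edge = np.hstack([np.zeros((9, 5)), np.ones((9, 4))])
    print(np.abs(center_surround(flat)).max())   # ~0: uniform illumination is ignored
    print(np.abs(center_surround(edge)).max())   # >0: contrast is emphasized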
There is a popular conceit among observers, both informed and uninformed, that such a reverse engineering project is infeasible. Hofstadter worries that “our brains may be too weak to understand themselves.”[20]
But that is not what we are finding. As we probe the brain’s circuits, we find that the massively parallel algorithms are far from incomprehensible. Nor is there anything like an infinite number of them. There are hundreds of specialized regions in the brain, and it does have a rather ornate architecture, the consequence of its long history. The entire puzzle is not beyond our comprehension. It will certainly not be beyond the comprehension of twenty-first-century machines.
The knowledge is right there in front of us, or rather inside of us. It is not impossible to get at. Let’s start with the most straightforward scenario, one that is essentially feasible today (at least to initiate).
We start by freezing a recently deceased brain.
Now, before I get too many indignant reactions, let me wrap myself in Leonardo da Vinci’s cloak. Leonardo also received a disturbed reaction from his contemporaries. Here was a guy who stole dead bodies from the morgue, carted them back to his dwelling, and then took them apart. This was before dissecting dead bodies was in style. He did this in the name of knowledge, not a highly valued pursuit at the time. He wanted to learn how the human body works, but his contemporaries found his activities bizarre and disrespectful. Today we have a different view: expanding our knowledge of this wondrous machine is the most respectful homage we can pay. We cut up dead bodies all the time to learn more about how living bodies work, and to teach others what we have already learned.
There’s no difference here in what I am suggesting. Except for one thing: I am talking about the brain, not the body. This strikes closer to home. We identify more with our brains than our bodies. Brain surgery is regarded as more invasive than toe surgery. Yet the knowledge to be gained from probing the brain is too valuable to ignore. So we’ll get over whatever squeamishness remains.
As I was saying, we start by freezing a dead brain. This is not a new concept—Dr. E. Fuller Torrey, a former supervisor at the National Institute of Mental Health and now head of the mental health branch of a private research foundation, has 44 freezers filled with 226 frozen brains.[21]
Torrey and his associates hope to gain insight into the causes of schizophrenia, so all of his brains are of deceased schizophrenic patients, which is probably not ideal for our purposes.
We examine one brain layer—one very thin slice—at a time. With suitably sensitive two-dimensional scanning equipment we should be able to see every neuron and every connection represented in each synapse-thin layer. When a layer has been examined and the requisite data stored, it can be scraped away to reveal the next slice. This information can be stored and assembled into a giant three-dimensional model of the brain’s wiring and neural topology.
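The assembly step is conceptually simple, even if the scanning is not. Here is a minimal sketch of the idea, with made-up slice sizes and counts; the real data volumes would of course be vastly larger.

    import numpy as np

    def assemble_volume(slices):
        """Stack equally sized 2-D slice scans into a single 3-D volume."""
        return np.stack(slices, axis=0)   # shape: (num_slices, height, width)

    # Pretend we have scanned ten very thin layers of a tiny region of tissue;
    # each "scan" here is just a 256 x 256 array of random measurements.
    scanned_slices = [np.random.rand(256, 256) for _ in range(10)]
    volume = assemble_volume(scanned_slices)

    print(volume.shape)          # (10, 256, 256)
    print(volume[3, 100, 200])   # one voxel: the measurement at layer 3, row 100, column 200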
It would be better if the frozen brains had not been dead long before freezing. A dead brain will reveal a lot about living brains, but it is clearly not the ideal laboratory. Some of that deadness is bound to show up as deterioration in the neural structure. We probably don’t want to base our designs for intelligent machines on dead brains. We are likely to be able to take advantage of people who, facing imminent death, will permit their brains to be destructively scanned just slightly before, rather than slightly after, their brains would have stopped functioning on their own. Recently, a condemned killer allowed his brain and body to be scanned, and you can access all 10 billion bytes of him on the Internet at the Center for Human Simulation’s “Visible Human Project” web site.[22]
There’s an even higher resolution 25-billion-byte female companion on the site as well. Although the scan of this couple is not of high enough resolution for the scenario envisioned here, it is an example of donating one’s brain for reverse engineering. Of course, we may not want to base our templates of machine intelligence on the brain of a convicted killer anyway.
Easier to talk about are the emerging noninvasive means of scanning our brains. I began with the more invasive scenario above because it is technically much easier. We have in fact the means to conduct a destructive scan today (although not yet the bandwidth to scan the entire brain in a reasonable amount of time). In terms of noninvasive scanning, high-speed, high-resolution magnetic resonance imaging (MRI) scanners are already able to view individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs are being developed that will be capable of scanning individual nerve fibers that are only ten microns (millionths of a meter) in diameter. These will be available during the first decade of the twenty-first century. Eventually we will be able to scan the presynaptic vesicles that are the site of human learning.
We can peer inside someone’s brain today with MRI scanners, which are increasing their resolution with each new generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth (that is, speed of transmission), lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. (It is easier to get someone deceased to sit still, for one thing.) But noninvasively scanning a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.
A new scanning technology called optical imaging, developed by Professor Amiram Grinvald at Israel’s Weizmann Institute, is capable of significantly higher resolution than MRI. Like MRI, it is based on the interaction between electrical activity in the neurons and blood circulation in the capillaries feeding the neurons. Grinvald’s device is capable of resolving features smaller than fifty microns, and can operate in real time, thus enabling scientists to view the firing of individual neurons. Grinvald and researchers at Germany’s Max Planck Institute were struck by the remarkable regularity of the patterns of neural firing when the brain was engaged in processing visual information.[23]
One of the researchers, Dr. Mark Hübener, commented that “our maps of the working brain are so orderly they resemble the street map of Manhattan rather than, say, of a medieval European town.” Grinvald, Hübener, and their associates were able to use their brain scanner to distinguish between sets of neurons responsible for perception of depth, shape, and color. As these neurons interact with one another, the resulting pattern of neural firings resembles elaborately linked mosaics. From the scans, the researchers could see how the neurons were feeding information to one another. For example, they noted that the depth-perception neurons were arranged in parallel columns, providing information to the shape-detecting neurons, which formed more elaborate pinwheel-like patterns. Currently, the Grinvald scanning technology can image only a thin slice of the brain near its surface, but the Weizmann Institute is working on refinements that will extend its capability into three dimensions. Grinvald’s scanning technology is also being used to boost the resolution of MRI scanning. A recent finding that near-infrared light can pass through the skull is also fueling excitement about the potential of optical imaging as a high-resolution method of brain scanning.
The driving force behind the rapidly improving capability of noninvasive scanning technologies such as MRI is again the Law of Accelerating Returns, because it requires massive computational ability to build the high-resolution, three-dimensional images from the raw magnetic resonance patterns that an MRI scanner produces. The exponentially increasing computational ability provided by the Law of Accelerating Returns (and for another fifteen to twenty years, Moore’s Law) will enable us to continue to rapidly improve the resolution and speed of these noninvasive scanning technologies.
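To see how quickly that kind of exponential improvement compounds, consider a purely illustrative calculation: if effective scanning resolution doubled every two years (a hypothetical doubling time, not a figure from the text), a fifty-micron instrument would reach sub-tenth-micron resolution within two decades.

    # Purely illustrative: compounding improvement under an assumed two-year doubling time.
    doubling_time_years = 2.0
    start_resolution_microns = 50.0   # roughly the optical-imaging figure cited above

    for years in (0, 4, 8, 12, 16, 20):
        improvement = 2 ** (years / doubling_time_years)
        print(f"after {years:2d} years: {start_resolution_microns / improvement:.3f} microns")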
Mapping the human brain synapse by synapse may seem like a daunting effort, but so did the Human Genome Project, an effort to map all human genes, when it was launched in 1991. Although the bulk of the human genetic code has still not been decoded, there is confidence at the nine American Genome Sequencing Centers that the task will be completed, if not by 2005, then at least within a few years of that target date. Recently, a new private venture with funding from Perkin-Elmer has announced plans to sequence the entire human genome by the year 2001. As I noted above, the pace of the human genome scan was extremely slow in its early years, and has picked up speed with improved technology, particularly computer programs that identify the useful genetic information. The researchers are counting on further improvements in their gene-hunting computer programs to meet their deadline. The same will be true of the human-brain-mapping project, as our methods of scanning and recording the 100 trillion neural connections pick up speed from the Law of Accelerating Returns.