Consequently, my red cannot be your blue because there is no single, independent class of experience as “red.” The truth, instead, is that all examples of “what it is like” that you care to pick, from “burgundy” to “melancholy,” represent rich information about ourselves and the outside world unique to this moment, crucially not in isolation, but as a network of links between many strands of knowledge, and in comparison with all the other myriad forms of experience we are capable of. In this way, far from being irrelevant, our senses and feelings, although undeniably complex, serve a vital computational role in helping us understand and interact with the world.³
CAN A LAPTOP REALLY UNDERSTAND CHINESE?
 
The most famous defense of the idea that there is something special and nonprogrammable about our biological form of consciousness is the Chinese Room argument, first proposed by John Searle in 1980. The main purpose of this thought experiment was to demonstrate the impenetrability not of feeling, but of meaning. Searle was keen to prove that a human brain could not be reduced to a set of computer instructions or rules.
To describe the thought experiment, we need to turn to another gang of philosophers from the year 2412, Turing’s Nemesis. Restless and rebellious, these philosophers are prowling the streets of New York with an aggressive itch for a dialectic fight. Soon, they come across a group of Chinese tourists and decide to play a mischievous trick on them. They show the Chinese group a plain white room, which is entirely empty, except for a table, a chair, blank pieces of paper, and a pencil. They allow the Chinese to inspect every nook and cranny of the simple space. The only features to note, apart from a door and a naked lightbulb in the ceiling, are two letterboxes on either side of the windowless room, linking it with the outside world. One box is labeled IN and the other OUT.
The ringleader of Turing’s Nemesis, a thoroughly devious person, melodramatically explains to the Chinese group that he reveres their culture above all others and believes everyone else in the world does, too. In fact, he’s willing to bet a month’s wages that these Chinese people can pick any random sucker on the street, place him in this room, and that person will show that he worships their culture as well, because he will be able to fluently speak their language via the interchange of words on paper through the letterboxes. The exchanges will take place with the Chinese people on the outside and the random subject inside the room. The Chinese are quick to take up this bet (even in 2412, although quite a few non-Chinese people speak Mandarin, only a small proportion can write in the language).
The Chinese tourists take their time and pick a young Caucasian man. He does not seem particularly bright. He looks a little bewildered as they stop him on the street and pull him over. The ringleader of the philosophy gang accepts the man and helps him into the room. Out of sight, though, just as the ringleader shuts the door, he hands the man a thick book. He whispers to him that if he follows the simple guidelines in the book, there’s a week’s worth of wages in it for just a few hours of work. This book is effectively a series of conversion tables, with clear instructions for how to turn any combination of Chinese characters into another set of Chinese characters.
The man in the room then spends the next few hours accepting pieces of paper with Chinese writing through the IN box. The paper has fresh Chinese sentences from a member of the Chinese group outside. Each time the man trapped in the room receives a piece of paper, he looks up the squiggles in the book, and then converts these squiggles into other squiggles, according to the rules of the book. He then puts what he’s written into the OUT box—as instructed. He is so ignorant that he doesn’t even know he’s dealing in Chinese characters; nevertheless, every time he sends them his new piece of paper, the Chinese are amazed that the answer is articulate and grammatically perfect, as if he were a native Mandarin speaker. Though the young man does not know it, he is sending back entirely coherent, even erudite, answers to their questions. It appears to the Chinese that they are having a conversation with him. The Chinese observe in virtual shock that he seems, totally contrary to first impressions, rather charming and intelligent. Amazed and impressed, the Chinese reluctantly pay the bet and walk away, at least able to take home the consolation of a glow of pride at the universal popularity of their culture.
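To see how little the controller actually has to do, here is a minimal Python sketch of the rule book as a bare lookup table. The two entries and their wording are invented placeholders, not anything from the thought experiment; the point is only that the man's job reduces to blind match-and-copy.

```python
# A toy "rule book": every recognized input string is paired with a canned reply.
# The entries are invented placeholders purely for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢你的关心。",          # "How are you?" -> "I'm fine, thanks for asking."
    "你喜欢纽约吗？": "纽约非常热闹，我很喜欢。",    # "Do you like New York?" -> "New York is lively; I like it."
}

def man_in_the_room(slip: str) -> str:
    """Match the incoming squiggles against the book and copy out whatever
    it lists, with no understanding of either column."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(man_in_the_room("你好吗？"))
```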
With the Chinese group out of the way, the Turing’s Nemesis philosophers decide to keep their human guinea pig locked in the room a couple of hours longer. One of the Turing’s Nemesis members does in fact speak and read Chinese, and he translates each of the paper questions originally asked of the man in the room into English. He sends each question into the room in turn. The written answers, this time in English, come quite a bit faster. Although they aren’t nearly as well articulated as they were in Mandarin, they are somewhat similar to the Mandarin responses he had copied from the book. This time, however, the man actually understands everything that’s asked of him, and understands every answer he gives.
Now, claims the Chinese Room argument, if the mind were merely a program, with all its “if this, then that” rules and whatnot, it could be represented by this special book. The book contains all the rules of how a human would understand and speak Mandarin, as if a real person were in the room. But absolutely nowhere in this special room is there consciousness or even meaning, at least where Mandarin is concerned. The main controller in the room, the young man, has absolutely no understanding of Chinese—he’s just manipulating symbols according to rules. And the book itself cannot be said to be conscious—it’s only a book after all, and without someone to carry out the rules and words in the book, how can the book have any meaning? Imagine if almost all life on the planet went extinct, but somehow this book survived. On its own, without anyone to read it, it’s a meaningless physical artifact.
The point of all this is that, when the rules of the book are used to write Chinese, there is no consciousness or meaning in the room, but when English is written later on, and a human is involved, there is consciousness and meaning. The difference, according to Searle, is that the book is a collection of rules, but there is something greater in the human that gives us consciousness and meaning. Therefore meaning, and ultimately consciousness, are not simply programs or sets of rules—something more is required, something mysteriously unique to our organic brains, which mere silicon chips could never capture. And so no computer will ever have the capacity for consciousness and true meaning—only brains are capable of this, not as biological computers, with our minds as the software, but as something altogether more alien. Searle summarized this argument by stating that “syntax is not semantics.”
This argument—like all of the others I’ve described—may appear to be an unbreakable diamond of deductive reasoning, but it is in fact merely an appeal to our intuitions. Searle wants, even begs, us to be dismissive of the idea that some small book of rules could contain meaning and awareness. He enhances his plea by including the controller in the room, who is blindly manipulating the written characters even though he is otherwise entirely conscious. Perhaps most of us would indeed agree that intuitively there is no meaning or awareness of Mandarin anywhere in that room. But that’s our gut feeling, not anything concrete or convincing.
When you start examining the details, however, you find the analogy has flaws. It turns out that there are two tricks to this Chinese Room thought experiment that Searle has used, like a good magician, to lead our attention away from his sleight of hand.
The first, more obviously misleading feature is the fact that a fully aware man is in the room. He understands the English instructions and is conscious of everything else around him, but he is painfully ignorant of the critical details of the thought experiment—namely, the meaning of the Chinese characters he is receiving and posting. So we center our attention on the man's ignorance, and automatically extend this specific void of awareness to the whole room. In actual fact, the man is an irrelevance to the question. He is performing what in a brain would not necessarily be the conscious roles anyway—those of the first stages of sensory input and the last aspects of motor output. If there is any understanding or meaning to be found in that room for the Mandarin characters, it is in the rules of that book and not in the man. The man could easily be replaced by some utterly dumb, definitely nonconscious robot that feeds the input to the book, or a computerized equivalent of it, and takes the output to the OUT slot. So let's leave the human in the room out of the argument and move on to the second trick.
And to understand the second trick, we must pose a fundamental question: Does that book understand Mandarin Chinese or not?
THE MOST COMPLEX OBJECT IN THE KNOWN UNIVERSE
 
The answer to this question may seem simple. Our intuition tells us that there cannot be any consciousness or meaning in this special room because the small book is a simple object. How could one slim paperback actually be aware? But the thought experiment’s second slippery trick is to play with the idea that something as incredibly sophisticated and involved as language production could possibly be contained in a few hundred pages. It cannot, and as soon as you start trying to make the thought experiment remotely realistic, the complexity of the book (or any other rule-following device, such as a computer) increases exponentially, along with our belief that the device could, after all, understand the Chinese characters.
Let's say, for simplicity's sake, that we limit our book to a vocabulary of 10,000 Mandarin words, and sentences to no longer than 20 words. The book is a simple list of statements of the form: “If the input is sentence X, then the output is sentence Y.” We could be mean here. Let's assume that the Chinese people outside the room are getting increasingly desperate not to lose their bet. One of them actually thinks he half-spotted the Turing's Nemesis ringleader slip some kind of book to the guy in the room. Another member of the Chinese group happens to be a technology history buff and has played with a few clever computer simulations of human text chatters from the early twenty-first-century Turing Test competitions.⁴
He suggests a devious strategy—that they start coming up with any old combination of sequences varying in length from 1 to 20 words, totally ignoring grammar and meaning, to try to trick the person inside the room into silence. How big would the book have to be to cope with all the possibilities? The book would have to contain around 10⁸⁰ different pairs of sentences.⁵ If we assume it's an old-fashioned paper book, then it would have to be considerably wider than the diameter of our known universe—so fitting it into the room would be quite a tight squeeze! There is also the issue of the physical matter needed to make up this weighty tome. The number of pairs of sentences happens to roughly equal the estimated number of atoms in the observable universe, so the printer of the book would be running out of paper very early on even with the first copy! Obviously, it would be hopelessly unrealistic to make any kind of book that not only contained every possible sequence of up to 20 words, but also connected each sequence as a possible question to another as the designated answer. And even if the book were to be replaced by a computer that also performed this storage and mapping, the computer engineers would find there was simply not enough matter in the universe to build its hard disk.
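As a quick back-of-the-envelope check of that 10⁸⁰ figure, here is a short Python sketch using only the chapter's stated assumptions (a 10,000-word vocabulary and inputs of 1 to 20 words). Since each possible input needs its own designated reply, the number of entries in the book is roughly the number of possible inputs.

```python
# Count the possible input sequences under the stated assumptions:
# a 10,000-word vocabulary and sequences of 1 to 20 words.
vocab_size = 10_000
max_length = 20

total_inputs = sum(vocab_size ** k for k in range(1, max_length + 1))

# The sum is dominated by the 20-word sequences: 10,000^20 = (10^4)^20 = 10^80.
print(f"roughly 10^{len(str(total_inputs)) - 1} possible inputs")  # -> roughly 10^80
```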
Let's try to move toward a more realistic book, or more practically, a computer program, that would employ a swath of extremely useful shortcuts to convert input to coherent output, as we do whenever we chat with each other. In fact, just for kicks, let's make a truly realistic program, based exactly on the human brain. It might appear that this is overkill, given that we are only interested in the language system, but our ability to communicate linguistically depends on a very broad range of cognitive skills.
Although almost all neuroscientists assume that the brain is a kind of computer, they recognize that it functions in a fundamentally different way from the PCs on our desks. The two main distinctions are whether an event has a single cause and effect (essentially a serial architecture), or many causes and effects (a parallel architecture), and whether an event will necessarily cause another (a deterministic framework), or just make it likely that the next event will happen (a probabilistic framework). A simple illustration of a serial deterministic architecture is a single line of dominoes, all very close together. When you flick the first domino, it is certain that it will push the next domino down, and so on, until all dominoes in the row have fallen. In contrast, for a parallel probabilistic architecture, imagine a huge jumble of vertically placed dominoes on the floor. One domino falling down may cause three others to fall, but some dominoes are spaced such that they will only touch another domino when they drop to the ground, which leaves the next domino tottering, possibly falling down and initiating more dominoes to drop, but not necessarily.
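To make the domino contrast concrete, here is a purely illustrative toy simulation; the fan-out of three and the 50 percent toppling probability are arbitrary assumptions, not anything from the text. The serial, deterministic chain ends the same way every time, while the parallel, probabilistic jumble produces a different cascade on every run.

```python
import random

def serial_chain(n: int) -> int:
    """A single line of closely spaced dominoes: each fall certainly causes
    the next, so the whole row always ends up down."""
    down = [False] * n
    for i in range(n):
        down[i] = True
    return sum(down)

def parallel_jumble(n: int, fan_out: int = 3, p_topple: float = 0.5) -> int:
    """A jumble of dominoes: each falling domino touches several others, but
    each contact only topples its neighbour with some probability."""
    down = [False] * n
    frontier = [0]      # flick the first domino
    down[0] = True
    while frontier:
        next_frontier = []
        for _ in frontier:
            for j in random.sample(range(n), k=min(fan_out, n)):
                if not down[j] and random.random() < p_topple:
                    down[j] = True
                    next_frontier.append(j)
        frontier = next_frontier
    return sum(down)

print(serial_chain(100))     # always 100
print(parallel_jumble(100))  # varies: the cascade can fan out or fizzle
```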
Although modern computers are slowly introducing rudimentary parallel features, traditionally, at least, a PC works almost entirely in a serial manner, with one calculation leading to the next, and so on. In addition, it's critical that a computer chip functions in a deterministic way—if this happens, then that has to happen. Human brains are strikingly different: Our neurons are wired to function in a massively parallel way. The vast majority of our neurons are also probabilistic: If this neuron sends an output to many other neurons, then that merely makes it more (or sometimes less) likely that these subsequent neurons will be activated, or “fire.”
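The same contrast can be sketched at the level of a single unit. In the minimal example below, the logistic firing rule and the example weights are invented stand-ins for real neuron dynamics, not a model from the book: a chip-like gate always returns the same output for the same inputs, whereas the toy neuron's summed, weighted input only sets its probability of firing.

```python
import math
import random

def deterministic_gate(a: int, b: int) -> int:
    """Chip-style logic: identical inputs always give the identical output."""
    return int(bool(a) and bool(b))

def probabilistic_neuron(inputs, weights, bias=0.0) -> int:
    """Toy neuron: the weighted input only sets the probability of firing
    (squashed through a logistic function), not a guaranteed outcome."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    p_fire = 1.0 / (1.0 + math.exp(-drive))
    return 1 if random.random() < p_fire else 0

print(deterministic_gate(1, 1))                                   # always 1
print([probabilistic_neuron([1, 1, 0], [0.8, 0.5, -0.3]) for _ in range(10)])  # varies run to run
```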
