Labyrinths of Reason

William Poundstone

These instructions haven’t gotten us anywhere yet. Who knows how much more complicated is the part of the instructions telling how to think about foxes?

We do not have Searle’s algorithm for understanding Chinese, but we do have simpler ones. A very naïve person, who has never seen a pocket calculator before, might get the woefully wrong idea that it can think. You could disabuse him of this notion in a Searle experiment. Give him the specs for the microprocessor used in the calculator, and a wiring diagram, and specify the electrical inputs that would result from punching out a problem on the calculator’s keys. Let him run through the action of the microprocessor as it does some math. The human-simulating-microprocessor would produce the correct results, but would have no idea of the actual mathematical operation being carried out. He wouldn’t know whether he was adding 2 plus 2 or taking the hyperbolic cosine of 14.881 degrees. The person would feel no consciousness of the abstract mathematical operation, and neither would the pocket calculator. If anyone tried to argue the systems reply, you could have the subject memorize everything and do it in his head. Right?

Don’t be so sure. A calculator may run through thousands of machine steps to perform a simple calculation. The experiment would probably take hours. Unless the subject had a phenomenal memory, it would be impossible for him to do the microprocessor simulation in his head. He would almost certainly forget some intermediate results along the way and ruin everything.
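
To see just how blind this rule-following is, here is a minimal sketch in Python of the sort of steps the subject would execute. The instruction set is an invented toy register machine, not any real calculator’s chip; the point is that nothing in the rules says “this is addition.”

```python
# A toy register machine (invented for illustration; not any real
# microprocessor). The executor follows opcodes blindly, one at a time,
# with no notion that the overall operation is "addition."

def run(program, registers):
    """Apply opcodes mechanically, the way Searle's subject applies rules."""
    pc = 0       # program counter: which instruction to apply next
    steps = 0    # how many rote steps the "human" performs
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":      # add 1 to a register
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":    # subtract 1 from a register
            registers[args[0]] -= 1
            pc += 1
        elif op == "JZ":     # jump to a given instruction if a register is zero
            pc = args[1] if registers[args[0]] == 0 else pc + 1
        steps += 1
    return registers, steps

# Compute 2 + 2 the long way: drain register B into register A.
program = [
    ("JZ", "B", 4),   # halt when B reaches zero
    ("DEC", "B"),
    ("INC", "A"),
    ("JZ", "Z", 0),   # Z is always zero, so this always loops back
]
registers, steps = run(program, {"A": 2, "B": 2, "Z": 0})
print(registers["A"], "after", steps, "rote steps")   # 4 after 9 rote steps
```

Even this trivial sum takes nine rote steps; a real calculator’s square-root routine would take thousands, and the person pushing the symbols around would be none the wiser.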

Now consider the plight of Searle’s subject. The book of instructions must be very big indeed! It must be far, far bigger than any room on earth.

Since no one has devised an algorithm for manipulating Chinese characters able to “answer” questions intelligently, it is impossible to say how big or complex the algorithm would be. But given that the algorithm must simulate human intelligence, it is reasonable to think that it cannot be much less complex than the human brain.

It is conceivable that each of the brain’s 100 billion neurons plays some part in actual or potential mental processes. You might expect, then, that the instructions for manipulating Chinese symbols as a human does would have to involve at least 100 billion distinct instructions. If there is one instruction per page, that would mean 100 billion pages. So the “book” What to Do If They Shove Chinese Writing Under the Door would more realistically be something like 100 million volumes of a thousand pages each. That’s approximately a hundred times the amount of printed matter in the New York Public Library. This figure may be off by a few factors of 10, but it is evident that there is no way anyone could memorize the instructions. Nor could they avoid using scratch paper, or better, a massive filing system.
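
Taking the estimate at face value, the arithmetic is easy to check. The shelf-length figure below rests on our own illustrative assumption of a 5-centimeter spine per volume:

```python
# Checking the estimate's arithmetic at face value: one instruction per page,
# 100 billion instructions, bound into thousand-page volumes.
instructions = 100_000_000_000
pages_per_volume = 1_000

volumes = instructions // pages_per_volume
print(f"{volumes:,} volumes")            # 100,000,000 -- 100 million volumes

# Shelf space, assuming a 5 cm spine per volume (our illustrative figure):
shelf_km = volumes * 0.05 / 1_000        # meters of shelf, converted to km
print(f"{shelf_km:,.0f} km of shelving") # 5,000 km of shelving
```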

It’s not just a matter of the algorithm happening to be impractically bulky. The Chinese algorithm encapsulates much of the human thought process, including a basic stock of common knowledge (such as how people act in restaurants). Can a human brain memorize something that is as complex as a human brain? Of course not. You cannot do it any more than you can eat something that is bigger than you are.

By the same token, you’ve probably seen statistics like “The average American eats an entire cow every six months.” A cow is bigger than a person, but the statistical beef eater consumes a little bit of a cow at a time. There is never very much of a cow inside you at any time. So it would be with Searle’s subject.

Since the brain is made of physical stuff and stores memories as chemical and electrical states of this physical stuff, it has a finite capacity to remember things. It is not clear how much of the brain is available for storing memories, but certainly not all of it, and perhaps only a small fraction. Other parts of the brain have to be available to manipulate memories, process new sensory information, etc.

Evidently, all the variations on the thought experiment (from Searle and his critics) that have the subject memorizing the rules are misleading. It is impossible for the person to memorize anything more than a tiny fraction of the complete algorithm. He must constantly refer to the instructions and to his scratch sheets/filing system. Frequently the instructions will refer him to a certain scratch sheet, and he will look at it and shake his head: “Gee! I can’t even remember writing this!” Or he will turn to a page in the instructions, see a coffee ring, and know that he has consulted the page before but not remember it.

The human is, ultimately, a very small part of the total process. He is like a directory assistance operator who looks up thousands of phone numbers every day but cannot remember them a few moments after reciting them. The information about phone numbers is practically all in the phone books; and in Searle’s experiment, the algorithm exists mostly in the instructions and scratch sheets and hardly at all in the human or the tiny fraction of the instructions he remembers at the moment.

That the human in the room is a conscious being is irrelevant and rather disingenuous. He could be replaced by a robot (not a slick, science-fictional robot with artificial intelligence; just a device, maybe a little more complicated than a mechanical fortune-teller). The fact that the human fails to experience a second consciousness is no more significant than that Volume 441,095 of the instructions does.

This explains the human’s denial of understanding Chinese. It is less satisfying in saying where and how the consciousness exists in the process. We want to point to the scratch paper, instructions, and so on, and say, “The consciousness is right over there by that filing cabinet.” About all we can do is to postulate that we are failing to see the forest for the trees. We are like the man inside Cole’s giant water drop who can see nothing wet.

The Chinese room is dilated in time more than in space. Imagine we have a time machine that can accelerate the Chinese room a trillionfold or so. Then the pages of the instructions would be a blur. Stacks of scratch paper would appear to grow organically. The human, moving too fast to see, would be a ghost in the machine. Possibly, part of our concept of consciousness requires that things happen too fast for us to keep up with them.

A Conversation with Einstein’s Brain

Douglas Hofstadter devised a thought experiment (1981) in which the exact state of Einstein’s brain at death is recorded in a book, along with instructions for simulating its operation. By carefully applying the instructions, you can have a (very slow) posthumous conversation with Einstein. The responses thus derived are exactly what Einstein would have said. You have to address the book as “Albert Einstein,” and not as a book, because it “thinks” it is Einstein!

Hofstadter’s thought experiment neatly splits the presumed consciousness into information (the book) and process (the person following the book’s instructions). Everything that makes the book Einstein is in the book. But the book, sitting on a shelf, patently has no more consciousness than any other book. This leads to an ingenious set of riddles on the “mortality” of Searle simulations.
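
The split can be made concrete with a deliberately trivial sketch. The state and rule below are hypothetical stand-ins, compressing what would really be billions of entries: the “book” is inert data, and nothing happens unless an outside process applies the rules.

```python
# A hypothetical miniature of Hofstadter's split: the "book" is pure data
# (a recorded state plus transition rules); the "reader" is whoever applies
# the rules. Real rules would fill millions of pages; this is a stub.

book_state = {"last_heard": None, "reply": None}

def apply_rule(state, utterance):
    """One instruction's worth of the book's rules, applied by a reader."""
    return {"last_heard": utterance,
            "reply": f"(what Einstein would say to {utterance!r})"}

book_state = apply_rule(book_state, "Guten Tag, Herr Einstein")
# ... the reader now shelves the book for two weeks. Nothing in book_state
# changes while it sits there; when reading resumes, the conversation picks
# up exactly where it left off, with no detectable gap "inside."
book_state = apply_rule(book_state, "What is spacetime?")
print(book_state["reply"])
```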

Suppose that someone patiently applies the rules in the book at the rate of so many instructions per day. Einstein’s consciousness is, or seems to be, re-created. After a while the human replaces the book on the shelf and takes a two-week vacation. Is the book “Einstein” dead?

Well, the book could no more “notice” the hiatus than we could detect it if time stopped. To the “Einstein” of the book, the person is analogous to the physical laws that keep our brains ticking.

What if the person carrying out the instructions slowed down to one instruction a year? Is that enough to keep the book “alive”? What about one instruction a century? What if the interval between instructions doubles each time?
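
The last question has a crisp arithmetic side. Assuming the first gap is one second and each later gap doubles (our assumption, for concreteness), the waits grow so fast that only a few dozen instructions are ever executed on any cosmological timescale:

```python
# Back-of-the-envelope check: wait 1 s, 2 s, 4 s, ... before each instruction.
# The simulation never halts, but it slows down so fast that the age of the
# universe accommodates only a few dozen instructions.
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_SECONDS = 1.4e10 * SECONDS_PER_YEAR

elapsed, executed = 0, 0
while elapsed < AGE_OF_UNIVERSE_SECONDS:
    elapsed += 2 ** executed   # the gap before instruction n is 2**n seconds
    executed += 1
print(executed)                # 59 -- fewer than sixty instructions, ever
```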

Few concepts are more inherently paradoxical than omniscience. Most cultures believe in a superior being or beings with total knowledge. Yet omniscience readily leads to contradiction. In part the trouble is that there is something suspect about utter perfection in anything. At the very least, omniscience, if it exists, has some unexpected properties.

The most dazzling of paradoxes of omniscience is of recent vintage (1960). Devised by physicist William A. Newcomb, the paradox has spurred almost unprecedented interest in the scientific community. (The Journal of Philosophy called it “Newcombmania.”) Besides exploring the issues of knowledge and prediction, Newcomb’s paradox offers a new twist on that philosophical standard, free will.

Before getting to Newcomb’s paradox, it will be instructive to approach it via two simpler but related situations from game theory, the abstract study of conflict.

The Paradox of Omniscience

The “paradox of omniscience” shows that being all-knowing can be to your disadvantage. The paradox is described in the context of a deadly diversion of game theorists and 1950s teenagers called “chicken.” This is the adolescent dare game in which two drivers race toward each other on a collision course. You are in the driver’s seat of a car traveling at high speed in the middle of a deserted highway. Your opponent is in an identical car, traveling at the same speed toward you. If neither of you veers to the side, both will crash and die. Neither of you wants that. What you really want is to show your machismo by not swerving—and having your opponent swerve (lest you both get clobbered). Failing that, there are two intermediate scenarios. It wouldn’t be so bad if both you and your opponent chickened out. At least you’d survive, and wouldn’t suffer the humiliation of being the one who flinched while his opponent kept his cool. Of course, even the latter would be better than instant death in a head-on collision.
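
Chicken’s structure can be summarized in a small payoff table. The numbers below are our own illustrative stand-ins; only their ordering matters (winning beats mutual swerving, which beats humiliation, which beats a crash):

```python
# Chicken's payoffs, with made-up numbers whose ordering is all that matters.
# Each entry is (your payoff, opponent's payoff).
SWERVE, STRAIGHT = "swerve", "straight"

payoffs = {
    (SWERVE,   SWERVE):   (1, 1),     # both chicken out: mild embarrassment
    (SWERVE,   STRAIGHT): (-1, 2),    # you flinch, he keeps his cool
    (STRAIGHT, SWERVE):   (2, -1),    # you keep your cool, he flinches
    (STRAIGHT, STRAIGHT): (-10, -10), # head-on collision: worst for both
}

# Neither move dominates: the best reply depends on what the other driver does.
for his in (SWERVE, STRAIGHT):
    best = max((SWERVE, STRAIGHT), key=lambda mine: payoffs[(mine, his)][0])
    print(f"if he plays {his}, your best reply is {best}")
# if he plays swerve, your best reply is straight
# if he plays straight, your best reply is swerve
```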

In game theory, chicken is interesting because it is one of a few fundamental situations in which the best course is not immediately apparent. When the game is played between ordinary mortals, each driver’s situation is identical. In the long run, the best either person can do is to chicken out, in the hope that the opponent will be smart enough to do the same. If one driver does not swerve, the opponent will be angry and may not swerve the next time, with dire consequences for both. In short, no chicken player reaches middle age except by being a consistent coward.

Now imagine playing chicken with an omniscient opponent. The other driver is gifted with infallible ESP. He can and does anticipate your moves with perfect accuracy. (You are still an ordinary mortal.) “Oh-oh!” you think. “The whole point of chicken is guessing what the other guy will do. I’m in big trouble!”

Then you mull over your predicament a bit and realize that you have an unbeatable advantage. It is foolish to swerve with an all-knowing opponent. He will predict your swerve and thus won’t swerve himself—resulting in complete failure for you.

Your best course is not to swerve. Anticipating that, Mr. Know-it-all has only two options: to swerve and survive (albeit with humiliation) or not to swerve and die. Provided he is rational and does not want to die, he can only swerve. Consequently, the omniscient player is at a disadvantage.
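
The argument can be checked mechanically. Continuing the sketch above (same illustrative payoffs), model the omniscient driver as a perfect predictor who sees your committed move and then plays his own best response:

```python
# The omniscient driver as a perfect predictor: he sees your committed move
# and then maximizes his own payoff. Same illustrative numbers as above.
SWERVE, STRAIGHT = "swerve", "straight"
payoffs = {
    (SWERVE,   SWERVE):   (1, 1),
    (SWERVE,   STRAIGHT): (-1, 2),
    (STRAIGHT, SWERVE):   (2, -1),
    (STRAIGHT, STRAIGHT): (-10, -10),
}

def omniscient_reply(your_move):
    """Predicts your move perfectly, then plays his own best response."""
    return max((SWERVE, STRAIGHT), key=lambda his: payoffs[(your_move, his)][1])

for yours in (SWERVE, STRAIGHT):
    his = omniscient_reply(yours)
    print(f"you: {yours:8} he: {his:8} payoffs: {payoffs[(yours, his)]}")
# you: swerve   -> he: straight -> you are humiliated (-1)
# you: straight -> he: swerve   -> he flinches; you get the best outcome (2)
```

Committing to go straight turns his foresight against him, which is exactly the disadvantage described in the text.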

The paradox of omniscience is merely of the “common sense is wrong” variety. Surprising as its conclusion may be, it is valid and not like the contestable reasoning of the prisoner in the unexpected hanging. The omniscient driver cannot negotiate his way out of the disadvantage either. Allow the drivers a tête-à-tête before the game. The omniscient driver can take one of two bargaining positions:

1. “Make my day.” He can play tough by threatening to swerve if and only if you swerve.

2. “Look at the long term.” He can appeal to your sense of wisdom (or your knowledge of game theory): “Sure, you might get away with not swerving this time. But look at the long term. The only course that works in the long run is for both of us to swerve.”

The first strategy’s threat has no teeth. The omniscient driver can bluster all he likes, but if he foresees that you aren’t going to swerve, would he really not swerve and kill himself? Not if he isn’t suicidal.

The second strategy, which appears to be 180 degrees removed from the first, falls victim to the same counterstrategy. You still need only resolve not to swerve to create a swerve-or-die situation for the omniscient driver.

Situations like chicken (and implicit paradoxes of omniscience) occur frequently in the Old Testament. Adam, Eve, Cain, Saul, and Moses challenged an omniscient God, who had told them that disobedience, while pleasurable in the short run, would be ruinous in the long run. The paradox is weakened by the fact that the omniscient deity is also all-powerful and can presumably overcome any disadvantage deriving from His omniscience.

Even today chicken is being played all the time. Game theorists suggested chicken as a metaphor for the 1962 Cuban missile crisis, with the United States and the Soviet Union as the players. In geopolitical contexts, the paradox of omniscience calls the value of espionage into question. An all-knowing nation may be at a disadvantage in some situations (note that the paradox does not say that omniscience is disadvantageous in all situations). For the paradox to apply, nation A must have such a vast network of spies that it can learn of every high-level decision in nation B. Nation B must be aware that it is hopelessly riddled with moles and cannot keep a decision secret from nation A. (The nonomniscient player must always be aware of the opponent’s omniscience for the paradox to apply.) Ironically, the latter requirement may prevent the paradox from occurring much in the real world: Few governments are willing to acknowledge their security leaks.
