Labyrinths of Reason
by William Poundstone
In the first digital computers, the logic gates were vacuum tubes and the connections were wires. Later, the vacuum tubes gave way to transistors. Currently, powerful processors fit on a single chip. Most interconnections are printed circuits of thin metallic film.

No one really knows how small a processor or a logic gate could be. Experimental gates use films that are only several atoms thick. There are promising technologies that have yet to be exploited. Stockmeyer and Meyer were wildly optimistic in their thought experiment. They postulated that, somehow, it is possible to construct computer components as small as protons. As it stands now, protons and neutrons are the ultimate in measurable smallness. So however small the components in an “ideal” computer might be, they cannot be any smaller than 10^-15 meters across (negative exponents are fractions: that is 1 divided by 10^15, or a trillionth of a millimeter).

Assume that the proton-size components may be packed like sardines. Then any given volume can hold as many components as it could hold ideal spheres 10^-15 meters in diameter. A computer the size of an ordinary personal computer (which has a volume of maybe a tenth of a cubic meter) could contain about 10^44 distinct components. A minicomputer with a volume of a cubic meter would contain 10^45.
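The packing arithmetic above can be checked in a few lines. This is only a back-of-envelope sketch using the text's figures (components roughly 10^-15 m across, a 0.1 m³ personal computer, a 1 m³ minicomputer):

```python
import math

# Crude "sardine" packing: one component per tiny cube of side 1e-15 m.
component_size = 1e-15                  # meters, roughly a proton diameter
component_volume = component_size ** 3  # cubic meters per component

pc_volume = 0.1    # cubic meters, the text's personal computer
mini_volume = 1.0  # cubic meters, the text's minicomputer

print(f"PC:   ~10^{round(math.log10(pc_volume / component_volume))} components")    # ~10^44
print(f"Mini: ~10^{round(math.log10(mini_volume / component_volume))} components")  # ~10^45
```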

Another all-important factor in computer technology is speed. One bottleneck is the time it takes for a component such as a logic gate to switch from one state to another. The speed of light is the fastest that any form of information may be transmitted. At best, then, a component cannot switch faster than the time it takes light to cross it. If it did, one side of the component would “know” what was happening elsewhere faster than is permitted by relativity.

Light takes 3×10^-24 seconds to cross the diameter of a proton. In Stockmeyer and Meyer’s analysis, that was taken to be the switching speed of the components in the ideal computer.

In reality, computer speed also depends on how the components are interconnected and how well the available resources are marshaled for the problem at hand. Most present-day computers are serial, meaning that they do one thing at a time. At any instant, the computer is at one point in its algorithm. Much faster, potentially, are parallel-processing computers. Parallel computers contain many processors and split tasks among them. Most of the time, a parallel computer is doing many things at once.

Since we’re not skimping, assume that the ideal computer implements an ultra-sophisticated parallel-processing scheme. Every proton-size component is a distinct processor, and all are linked in some Connection Machine-like scheme that ensures relatively direct connections even when the number of processors is astronomical.

The computer splits its task among its processors by assigning each a distinct subset of the current list of beliefs. Let each processor be able to compare a new belief against its present subset instantly. It can determine whether there is a contradiction and fetch a new subset to test in its switching speed of 3×10^-24 seconds (call it an even 10^-23 to simplify the math). Then each processor can run through 10^23 logical tests in a single second. And there are 10^45 processors in a cubic-meter computer. The computer should be able to handle 10^68 tests per second.
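The throughput arithmetic in that paragraph is simple enough to verify directly, using the figures from the text:

```python
import math

switch_time = 1e-23               # seconds per test, rounded up from 3e-24
per_processor = 1 / switch_time   # 1e23 tests per second per processor
processors = 1e45                 # a cubic-meter computer
total_rate = per_processor * processors
print(f"~10^{round(math.log10(total_rate))} tests per second")  # ~10^68
```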

That’s fast. It is so fast that, in the first second, the computer could do all the necessary comparisons to build the list up to 225 beliefs.

And then, suddenly, things would slow down. It would take a second to add the 226th belief; two seconds to approve a 227th belief; about a minute to check the 232nd. The computer would be working as fast as ever, but the number of tests doubles as each belief is added to the list. It would take over a month to approve the 250th belief. Expanding the list to 300 would take—gulp!—38 million years.
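The doubling can be sketched as a rough model. The assumption here (mine, not the book's) is that approving the nth belief costs on the order of 2^n tests at 10^68 tests per second; this reproduces the book's round figures only loosely:

```python
# Rough model (an assumption): approving the nth belief means testing it
# against on the order of 2^n subsets, at 10^68 tests per second.
RATE = 10**68  # tests per second, the cubic-meter computer above

def seconds_to_add(n):
    return 2**n / RATE

print(seconds_to_add(226))          # about a second
print(seconds_to_add(232))          # about a minute
print(seconds_to_add(250) / 86400)  # months' worth of days
```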

Okay, but this is a thought experiment, and we have all the time in the world. The age of the universe is estimated to be about 10 billion years. That’s between 10^17 and 10^18 seconds old. Add another order of magnitude or two (10^19 seconds), and you have a decent approximation to “forever.” By the time the universe is ten times older than it is now, virtually all the stars will have burned out, and life will probably be extinct. So 10^19 seconds is about the longest amount of time that it makes any sense to talk about. It follows that if an ideal computer with 10^45 processors worked from the beginning of time to its end, it could check the stupendous number of 10^19 times 10^68 subsets against new beliefs. That’s 10^87. That’s enough to take us up to a list of 289 beliefs.
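The figure of 289 follows from the budget arithmetic: a list of n beliefs has 2^n subsets, so the largest reachable list satisfies 2^n ≈ 10^87. A quick check:

```python
import math

# 10^19 seconds times 10^68 tests/second = a lifetime budget of 10^87 tests.
budget = 10**19 * 10**68
assert budget == 10**168 // 10**81  # i.e., exactly 10^87

# Largest n with 2^n within the budget:
print(round(math.log2(budget)))  # ~289 beliefs
```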

We need a more powerful computer. Once the rock-bottom size of components has been reached, computers must get larger to get more powerful. Let the computer expand beyond the confines of a room, or a house … or a county or continent. However big it got, the ultimate limit would be the size of the universe.

The distance of the most remote quasar currently known is estimated at 12 to 14 billion light-years. If the universe is finite, a generous estimate of its “diameter” might be 100 billion light-years. A light-year is just under 10^13 kilometers, or 10^16 meters. That makes the diameter of the universe something like 10^27 meters and its volume about 10^81 cubic meters.
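Those size figures can be checked with a rough calculation, treating the universe as a sphere of the stated diameter (9.46×10^15 m per light-year is the standard value, "just under 10^16 meters"):

```python
import math

light_year = 9.46e15           # meters
diameter = 100e9 * light_year  # 100 billion light-years
volume = (4 / 3) * math.pi * (diameter / 2) ** 3  # treat as a sphere

print(f"diameter ~10^{round(math.log10(diameter))} m")  # ~10^27
print(f"volume   ~10^{round(math.log10(volume))} m^3")  # ~10^81
```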

Therefore, a computer as big as the universe could contain 10^45 times 10^81 components the size of protons. That comes to 10^126 components. Call this pipe dream absurd; the point is this: No matter what technical advances may await, no computer will ever be made of more than 10^126 parts. No brain, no physical entity of any kind, could have any more parts. That is one limit we must live with. And if this computer works from the beginning of time to the end, it could execute at most 10^126 times 10^42 fundamental operations—10^168 in all.

This 10^168 is an absolute limit on how many times you could do anything. It’s the closest thing to a supertask there is. There isn’t enough time or enough space to allow more than 10^168 of anything. And unfortunately, running 10^168 logical tests still doesn’t get us very far. The computer would conk out after extending the list to about 558 beliefs.
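The grand total, and the 558-belief ceiling that follows from it, check out the same way (each component performs 10^23 operations per second for 10^19 seconds, or 10^42 in all):

```python
import math

components = 10**126           # universe-sized computer
ops_each = 10**23 * 10**19     # 10^42 operations per component over all time
ops_total = components * ops_each
assert ops_total == 10**168

# Largest belief list n with 2^n within the total budget:
print(round(math.log2(ops_total)))  # ~558 beliefs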

We can at most know 558 things?! No, of course not. We know many things through simple deductions, syllogisms, and sorites. The number 558 is the rough limit for beliefs logically complex enough to require an exponential-time-checking algorithm. A set of
558 beliefs as “unruly” as those in Carroll’s pork-chop problem would probably exceed the computing power of even a computer as big as the universe. That is why new paradoxes continue to be invented.

Logically complex beliefs are not rare or unnatural. Even those beliefs we idealize as simple (like “All ravens are black”) are actually qualified with a battery of auxiliary hypotheses. The difficulty of SATISFIABILITY speaks of more than logic puzzles.

When we cannot even tell if our more complex beliefs contain a contradiction, we don’t fully understand them. We certainly can’t deduce all that may follow from those beliefs. If you think of logical deduction as a kind of vision through which we see the world, then that vision is limited. Sorites, chains of simple deductions, are our principal lines of sight. Through them we peer far into the murk. Our vision for more complex deductions is extremely nearsighted. We don’t see everything, not even everything implicit in our experience. There are things going on out there that we will never appreciate.

It is not even that we are too feebleminded to understand all that we miss. If we met up with the omniscient being who keeps popping up in these paradoxes, he would be able to show us what we are missing, and we could convince ourselves it was true. The answer to a puzzle is simple once you see it.

The “we” here includes humans, computers, extraterrestrial beings, and any physical agency. NP problems are hard for all. Stockmeyer and Meyer’s thought experiment is an information-age counterpart to Olbers’s paradox. From the fact that we see stars in the sky—from the fact that the entire universe is
not
a computer—we can be certain that no one in the universe knows everything.

PART THREE

T
HE VOYNICH MANUSCRIPT is a very old, 232-page illuminated book written entirely in a cipher that has never been decoded. Its author, subject matter, and meaning are unfathomed mysteries. No one even knows what language the text would be in if you deciphered it. Fanciful pictures of nude women, peculiar inventions, and nonexistent flora and fauna tantalize the would-be decipherer. Color sketches in the exacting style of a medieval herbal depict blossoms and spices that never sprang from earth and constellations found in no sky. Plans for weird, otherworldly plumbing show nymphets frolicking in sitz baths connected with branching elbow-macaroni pipes. The manuscript has the eerie quality of a perfectly sensible book from an alternate universe. Do the pictures illustrate topics in the text, or are they camouflage? No one knows.

A letter written in 1666 claims that Holy Roman Emperor Rudolf II of Bohemia (1552–1612) bought the manuscript for 600 gold ducats. He may have bought it from Dr. John Dee, a glib astrologer and mathematician who traveled on the winds of fortune from one royal court to another. Rudolf thought the manuscript was written by the English monk and philosopher Roger Bacon (c. 1220–92).

Bacon was as good a guess as any. As “Doctor Mirabilis” he had become a semi-mythic figure in the generations after his death, part scholar and part sorcerer. Bacon was a collector of arcane books. He knew about gunpowder and hinted in his writings that he knew about other things he wasn’t ready to make public. At the time of his death, Bacon’s works were considered so dangerous that (according to romantic conceit) they were nailed to the wall of Oxford’s library to molder in the wind and rain.

The Voynich manuscript is said to have languished for a long time at the Jesuit College of Mondragone in Frascati, Italy. Then in 1912 it was purchased by Wilfred M. Voynich, a Polish-born scientist and bibliophile. Voynich was the son-in-law of George Boole, the logician, and husband of Ethel Lillian Voynich, one of the best-known English writers in the Soviet Union and China (for
The Gadfly
, a revolutionary novel long forgotten in the West). Lacking any intelligible title, the manuscript took on Voynich’s name. Voynich brought it to America, where it was intensively studied. Scholars and crackpots have analyzed, then forgotten about, the Voynich manuscript in several cycles over the past seventy-five years. The manuscript is now at Yale University’s Beinecke Rare Book and Manuscript Library.

The manuscript’s cipher is no ordinary one. If it had been, it would have been cracked long ago. The cipher does not use Roman or any other conventional letters or symbols. It is not mirror-image writing or any simple distortion of familiar letters. The cipher employs approximately twenty-one curlicued symbols that loosely suggest some Middle Eastern scripts. Of course, the symbols aren’t from any known Middle Eastern alphabet. Some symbols are joined together like slurred musical notes. A few symbols appear rarely—or maybe they are sloppy variants of the others. The writing forms “words” with spaces between them.

The diagram shows the commoner Voynich symbols labeled according to a scheme used by physicist William Ralph Bennett, Jr., who has subjected the manuscript to computer analysis. Bennett’s letters (shown below each Voynich symbol) are arbitrary and serve only to name the symbols and allow computer entry.

Folio 79 Verso from the Voynich manuscript

Other books

A Tale of Two Castles by Gail Carson Levine
Board Stiff (Xanth) by Anthony, Piers
Primal Force by D. D. Ayres
Summer at Shell Cottage by Lucy Diamond
Heart of the Desert by Carol Marinelli
Angel at Troublesome Creek by Ballard, Mignon F.