Gibbs' vision was panoramic; his proposal of a universal, probabilistic relation between micro-states and macro-properties has proved extremely fruitful. Think of the ways we describe low-entropy states mechanically: as having steep energy gradients, or clear distinctions of position or velocity. In general, we are talking about ordered situations, where, instead of a uniform mass of particles moving randomly, we see a shape in the cloud, something worthy of a name.
A cup on a table has a distinct identity: cup. Let it fall on the floor, and it becomes 15 irregular shards of china, 278 fragments, dust, some heat, and a sharp noise. The difference in length of description is significant. Claude Shannon (he of the roulette computer) worked both at MIT and Bell Labs on the problems of telephone networks. He saw, in his own domain, another physical process that never reversed: loss of meaning. The old joke tells how, in World War I, the whispered message from the front “Send reinforcements—we're going to advance,” passed back man to man, arrived at company headquarters as “Send three-and-fourpence; we're going to a dance.” All communications systems, from gossip to fiber optics, show a similar tendency toward degradation: every added process reduces the amount of meaning that can be carried by a given quantity of information.
Shannon's great contribution, contained in a paper written in 1948, is the idea that meaning is a statistical quality of a message. Shannon had realized that information, although sent as analog waves from radio masts or along telephone wires, could also be considered as particles: the “bits” that represented the minimum transmissible fact: yes or no, on or off, 1 or 0. A stream of information, therefore, was like a system of particles, with its own probabilities for order or disorder: 1111111111111 looks like a well-organized piece of information; 1001010011101 appears less so. One way to define this difference in degree of order is to imagine how you might further encode these messages. The first you could describe as “13 ones”; the second, without some yet higher-order coding system, is just thirteen damn things one after another; there's no way to say it using less information than the message itself.
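To make the contrast concrete, here is a minimal sketch (in Python, and in no way Shannon's own formalism) that gives each of the two strings a run-length description and computes the standard per-symbol entropy:

```python
import math
from itertools import groupby
from collections import Counter

def run_length_description(bits: str) -> str:
    """Describe a bit string as runs, e.g. '1111111111111' -> '13x1'."""
    return ",".join(f"{len(list(run))}x{value}" for value, run in groupby(bits))

def entropy_per_bit(bits: str) -> float:
    """Average information per symbol: H = -sum(p * log2 p)."""
    counts = Counter(bits)
    total = len(bits)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

ordered = "1111111111111"     # "13 ones": a much shorter description exists
disordered = "1001010011101"  # no description shorter than the string itself

for message in (ordered, disordered):
    print(message, "->", run_length_description(message),
          f"(entropy per bit: {entropy_per_bit(message):.2f})")
```

The ordered string collapses to “13x1” and carries zero entropy per bit; the disordered one produces a run-length “summary” longer than the message it summarizes.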
Communication, therefore, has its own version of entropy—and Shannon showed it to be mathematically equivalent to Boltzmann's equation. From low-entropy epigram to high-entropy shaggy dog story, every meaning is associated with a minimum amount of information necessary to convey it, beyond which extra information is redundant, like energy not available for work.
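The equivalence is visible simply by setting the two formulas side by side: Boltzmann's entropy in its statistical (Gibbs) form on the left, Shannon's information entropy on the right, differing only by a constant and the base of the logarithm:

$$ S = -k_B \sum_i p_i \ln p_i \qquad\qquad H = -\sum_i p_i \log_2 p_i $$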
The connection between meaning and energy, nonsense and entropy goes even deeper. In fact, the eventual solution to the paradox of Maxwell's demon was an understanding of their equivalence. The reasoning goes like this: for the demon to run its system of favoritism, accepting some particles and turning away others, it would have to store facts about these particles—which is in itself a physical process. Eventually, the demon would run out of space (since the system is finite) and would have to start to erase the data it held. Erasing data reduces the ratio of ordered to random information and so is a thermodynamically irreversible process: entropy increases. The perpetual motion machine remains impossible because its control system would absorb all the useful energy it generated.
The rules of the information system, the constraints within which its entropy tends to a maximum, are the conventions—the symbols, codes, and languages—through which we choose to communicate. These constraints can, themselves, have a great effect on the apparent order or meaning in a message.
For instance, Shannon showed how we can move from total gibberish (XFOML RXKHRJFFJUJ) to something that sounds like drunken Anglo-Saxon (IN NO IST LAT WHEY CRATICT FROURE) by requiring no more than that each group of three letters should reflect the statistical likelihood of their appearance together in written English. It takes only a few further statistical constraints on vocabulary, grammar, and style to specify the unique state of our language in our time. Shannon calculated the average redundancy of written English to be about 64 percent—that is, most messages could convey their meaning in a little more than a third of their length. Other languages encode different degrees of randomness or redundancy; you can determine the language a document is written in using nothing more than a computer's file compression program. Since compressibility is itself a sensitive measure of information entropy, the average ratio between compressed and uncompressed file sizes for a given language is an instant statistical identifier for that language.
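A minimal sketch of how such a compression-based identifier might work, assuming reference ratios measured beforehand from known samples (the figures below are placeholders for illustration, not real measurements):

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over raw size; lower means more redundancy squeezed out."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# Hypothetical per-language averages, each measured on large known samples.
reference_ratios = {"english": 0.61, "german": 0.63, "finnish": 0.58}

def guess_language(text: str) -> str:
    """Pick the language whose typical ratio lies closest to this document's."""
    r = compression_ratio(text)
    return min(reference_ratios, key=lambda lang: abs(reference_ratios[lang] - r))
```

A sturdier version would compress the unknown text concatenated with samples of each candidate language and see which pairing compresses best, but the principle is the same: redundancy leaves a statistical fingerprint.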
David Ruelle suggests that this idea can be taken even further: since one important aspect of statistical mechanics is that the overall constraints on a system leave their mark on every part of it (if you make your boiler smaller or hotter, the pressure goes up everywhere within it), then authorship is also a constraint with statistical validity. Shakespeare's average entropy should not be the same as Bacon's; Virgil's concision is not Ovid's. Perhaps this explains why we seem to recognize the hand of the maker even in an unfamiliar work: we don't confuse a previously unseen van Gogh with a Gauguin; Bach is indisputably Bach within the first few bars; a glance distinguishes classical architecture from neoclassical. The judgment that leads a reader to recognize an author is not the conscious, point-by-point examination of the expert: it is a probabilistic decision based on observing a statistical distribution of qualities. An Israeli team recently produced a word-frequency test that claims to determine whether a given passage was written by a man or a woman—we wonder what it would make of this one.
Our technologies shape our analogies: as steam was the preoccupation of the nineteenth century, and telephones of the early twentieth, so computers provided a philosophical reference point for the late twentieth. Kolmogorov extended Shannon's information entropy into what is now called algorithmic complexity: taking the measure of randomness in a system, message, or idea by comparing the length of its expression with the length of the algorithm or computer program necessary to generate it. So, for instance, the decimal expansion of π, although unrepeating and unpredictable, is far from random, since its algorithm (circumference over diameter) is wonderfully concise. Most strings of numbers have far higher entropy—in fact, the probability that you can compress a randomly chosen string of binary digits by more than k places is 2^-k: so the chance of finding an algorithm more than ten digits shorter than the given number it generates is less than one in 1,024. Our universe has very little intrinsic meaning.
Kolmogorov's idea brings us back to the probabilistic nature of truth. What are we doing when we describe the world but creating an algorithm that will generate those aspects of its consistency and variety that catch our imagination? “Meaning,” “sense,” “interest,” are the statistical signatures of a few rare, low-entropy states in the universe's background murmur of information. Without the effort made (the energy injected) to squeeze out entropy and shape information into meaning (encoding experience in a shorter algorithm), the information would settle into its maximum entropy state, like steam fitting its boiler or a dowager expanding into her girdle. Life would lose its plot, becoming exactly what depressed teenagers describe it as: a pointless bunch of stuff.
So what can we expect from the world? Boltzmann showed that we can assume any physical system will be in the state that maximizes its entropy, because that is the state with by far the highest probability. Shannon's extension of entropy to information allows us to make the same assumptions about evidence, hypotheses, and theories: that, given the constraint of what we already know to be true, the explanation that assumes maximum entropy in everything we do not know is likely to be the best, because it is the most probable. Occam's razor is a special case of this: by forbidding unnecessary constructions, it says we should not invent order where no order is to be seen.
The assumption of maximum entropy can be a great help in probabilistic reasoning. Laplace happily assigned equal probabilities to competing hypotheses before testing them—to the annoyance of people like von Mises and Fisher. You will recall how, when we thought of applying Bayes' method to legal evidence, we tripped over the question of what our prior hypothesis should be—what should we believe before we see any facts? Maximum entropy provides the answer: we assume what takes the least information to cover what little we know. We assume that, beyond the few constraints we see in action, things are as they usually are: as random as they can comfortably be.
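As a small illustration (not a general maximum-entropy solver), compare Laplace's equal assignment with more opinionated priors over three hypotheses; the uniform prior carries the most entropy, which is exactly why it assumes the least:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Three competing hypotheses and nothing else known: the maximum-entropy
# prior is Laplace's equal assignment. Any more "opinionated" prior has
# lower entropy -- it smuggles in order we have not actually observed.
priors = {
    "laplace (uniform)": [1/3, 1/3, 1/3],
    "mild preference":   [1/2, 1/4, 1/4],
    "strong preference": [0.9, 0.05, 0.05],
}
for name, p in priors.items():
    print(f"{name:20s} entropy = {entropy(p):.3f} bits")
```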
 
Slowly, by accretion, we are building up an answer to the quizzical Zulu who lurks within. Before, we had been willing to accept that probability dealt with uncertainty, but we were cautious about calling it a science. Now, we see that science itself, our method for casting whatever is out there into the clear, transmissible, falsifiable shape of mathematics, depends intimately on the concepts of probability. “Where is it?” is a question in probability; so are “How many are they?” “Who said that?” and “What does this mean?” Every time we associate a name or measure with a quality (rather than associating two mathematical concepts with each other) we are making a statement of probability. Some conclusions look more definite than others, simply because some states of affairs are more likely than others. As observers, we do not stand four-square surveying the ancient pyramids of certainty; we surf the curves of probability distributions.
Immanuel Kant's essential point (in glib simplification) is that reality is the medal stamped by the die of mind. We sense in terms of space and time, so we reason using the grammar imposed by that vocabulary: that is, we use mathematics. So if our sense of the world is probabilistic, does that also reflect an inescapable way of thinking? Are we, despite our certainties and our illusions, actually oddsmakers, progressing through life on a balance of probabilities? Should we really believe that?
When we've tried to believe it, we haven't always been very successful. The guilty secret of economics has long been the way people's behavior diverges from classical probability. From the days of Daniel Bernoulli, the discipline's fond hope has always been that economic agents—that's us—behave rationally, in such a way as to maximize subjective utility. Note that the terms have already become “utility” and “subjective”; money is not everything. Nevertheless, we are assumed to trade in hope and expectation, balancing probability against payoff, compounding past and discounting future benefits. In the world's casino—this palace of danger and pleasure we leave only at death—we place our different wagers, each at his chosen table: risk for reward, surplus for barter, work for pay (or for its intrinsic interest, or for the respect of our peers). Utility is the personal currency in which we calculate our balance of credit and debit with the world: loss, labor, injury, sadness, poverty are all somehow mutually convertible, and the risks they represent can be measured collectively against a similarly wide range of good things. Thanks to von Neumann and Morgenstern, economists have the mathematical tools to track the transfer of value around this system—the satisfaction of altruism, for instance, is as much part of the equation as the lust for gold. No one is merely a spectator at the tables: currency trader or nurse, burglar or philanthropist, we are all players.
And yet we don't seem to understand the rules very well. One of von Neumann's RAND colleagues, Merrill Flood, indulged himself by proposing a little game to his secretary: he would offer her $100 right away—or $150 on the condition that she could agree how to split the larger sum with a colleague from the typing pool. The two women came back almost immediately, having agreed to split the money evenly, $75 each; Flood was not just puzzled, he was almost annoyed. Game theory made clear what the solution should be: the secretary should have arranged to pass on as little as she thought she could get away with. The colleague, given that the choice was something against nothing, should have accepted any sum that seemed worth the effort of looking up from her typewriter. Yet here they came with their simplistic equal division, in which the secretary was actually worse off than if she had simply accepted the $100.
The secretaries were not atypical: every further study of these sharing games shows a greater instinct for equitable division—and a far greater outrage at apparent unfairness—than a straightforward calculation of maximum utility would predict. Only two groups of participants behave as the theory suggests, passing on as little as possible: computers and the autistic. It seems that fairness has a separate dynamic in our minds, entirely apart from material calculations of gain and loss. Which makes it ironic that communism, the political system devised to impose fairness, did it in the name of materialism alone.
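For the curious, here is a minimal sketch of the backward-induction reasoning the theory prescribes for Flood's game, with an assumed one-cent smallest unit (the function and figures are illustrative, not Flood's own formulation):

```python
def rational_split(pot: float, outside_option: float, smallest_unit: float = 0.01):
    """Backward-induction logic for Flood's game: the colleague accepts any
    positive amount, since the alternative is nothing; so the proposer passes
    on the smallest unit and keeps the rest, provided that beats simply
    taking the sure outside offer."""
    colleague = smallest_unit          # anything beats nothing
    proposer = pot - colleague         # keep everything else
    if proposer <= outside_option:     # otherwise just take the $100
        return outside_option, 0.0
    return proposer, colleague

print(rational_split(pot=150.0, outside_option=100.0))  # (149.99, 0.01)
# The secretaries' actual answer -- $75 each -- left the proposer worse off
# than the $100 she could have taken without negotiating at all.
```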
Nor is this the only test in which Homo sapiens behaves very differently from Homo economicus. The relatively new windows into the working brain—electroencephalography, positron emission tomography, functional magnetic resonance imaging—reveal how far we are from Adam Smith's world of sleepless self-interest. For example, we willingly take more risks if the same probability calculation is presented as gambling than as insurance. We seem to make very different assessments of future risk or benefit in different situations, generally inflating future pain and discounting pleasure. Our capacity for rational mental effort is limited: people rarely think more than two strategic moves ahead, and even the most praiseworthy self-discipline can give out suddenly (like the peasant in the Russian story who, after having resisted the temptation of every tavern in the village, gave in at the last, saying, “Well, Vanka—as you've been so good…”). Emotion, not logic, drives many of our decisions—often driving them right off the road: for every impulsive wastrel there is a compulsive miser; we veer alike into fecklessness and anxiety.