Everything Is Obvious

Duncan J. Watts

Critics of homo economicus have raised all these objections, and many more, over the years. In response, advocates of what is often called rational choice theory have dramatically expanded the scope of what is considered rational behavior to include not just self-interested economic behavior but also more realistic social and political behavior.[4]
These days, in fact, rational choice theory is not so much a single theory at all as it is a family of theories that make often rather different assumptions depending on the application in question. Nevertheless, all such theories tend to include variations on two fundamental insights—first, that people have preferences for some outcomes over others; and second, that given these preferences they select among the means available to them as best they can to realize the outcomes that they prefer. To take a simple example, if my preference for ice cream exceeds my preference for the money I have in my pocket, and there is an available course of action that allows me to exchange my money for the ice cream, then that’s what I’ll choose to do. But if, for example, the weather is cold, or the ice cream is expensive, my preferred course of action may instead be to keep the money for a sunnier day. Similarly, if buying the ice cream requires a lengthy detour, my preference to get where I am going may also cause me to wait for another time. Regardless of what I end up choosing—the money, the ice cream, the walk followed by the ice cream, or some other alternative—I am always doing what is “best” for me, given the preferences I have at the time I make the decision.

What is so appealing about this way of thinking is its implication that all human behavior can be understood in terms of individuals’ attempts to satisfy their preferences. I watch TV shows because I enjoy the experience enough to devote the time to them rather than doing something else. I vote because I care about participating in politics, and when I vote, I choose the candidate I think will best serve my interests. I apply to the colleges that I think I can get into, and of those I get accepted to, I attend the one that offers the best combination of status, financial aid, and student life. When I get there, I study what is most interesting to me, and when I graduate, I take the best job I can get. I make friends with people I like, and keep those friends whose company I continue to enjoy. I get married when the benefits of stability and security outweigh the excitement of dating. We have children when the benefits of a family (the joy of having children whom we can love unconditionally, as well as having someone to care for us in our old age) outweigh the costs of increased responsibility, diminished freedom, and extra mouths to feed.[5]

In Freakonomics, Steven Levitt and Stephen Dubner illustrate the explanatory power of rational choice theory in a series of stories about initially puzzling behavior that, upon closer examination, turns out to be perfectly rational. You might think, for example, that because your real estate agent works on commission, she will try to get you the highest price possible for your house. But as it turns out, real estate agents keep their own houses on the market longer, and sell them for higher prices, than the houses of their clients. Why? Because when it’s your house they’re selling, they make only a small percentage of whatever extra money a higher price brings in, whereas when it’s their own house, they keep the whole difference. The latter is enough money to hold out for, but the former isn’t. Once you understand the incentives that real estate agents face, in other words, their true preferences, and hence their actions, become instantly clear.

Likewise, it might at first surprise you to learn that parents at an Israeli day school, when fined for picking up their children late, actually arrived late more often than they did before any fine was imposed. But once you understand that the fine assuaged the pangs of guilt they were feeling at inconveniencing the school staff—essentially, they felt they were paying for the right to be late—it makes perfect sense. So does the initially surprising observation that most gang members live with their mothers. Once you do the math, it turns out that gang members don’t make nearly as much money as you would think; thus it makes perfect economic sense for them to live at home. Similarly, one can explain the troubling behavior of a number of high-school teachers who, in response to the new accountability standards introduced by the Bush Administration’s 2002 No Child Left Behind legislation, actually altered the test responses of their students. Even though cheating could cost them their jobs, the chance of getting caught seemed small enough that the cost of being stuck with a low-performing class outweighed the expected penalty for cheating.[6]

Regardless of the person and the context, in other words—sex, politics, religion, families, crime, cheating, trading, and even editing Wikipedia entries—the point that Levitt and Dubner keep returning to is that if we want to understand why people do what they do, we must understand the incentives that they face, and hence their preference for one outcome over another. When someone does something that seems strange or puzzling to us, rather than writing them off as crazy or irrational, we should instead seek to analyze their situation in hopes of finding a rational incentive. It is precisely this sort of exercise, in fact, that we went through in the last chapter with the ultimatum game experiments. Once we discover that the Au and Gnau tradition of gift exchange effectively transforms what to us looks like free money into something that to them resembles an unwelcome future obligation, what was previously puzzling behavior suddenly seems as rational as our own. It is just rational according to a different set of premises than we were familiar with before. The central claim of Freakonomics is that we can almost always perform this exercise, no matter how weird or wonderful the behavior in question may be.

As intriguing and occasionally controversial as Levitt and Dubner’s explanations are, in principle they are no different from the vast majority of social scientific explanations. However much sociologists and economists might argue about the details, that is, until they have succeeded in accounting for a given behavior in terms of some combination of motivations, incentives, perceptions, and opportunities—until they have, in a word, rationalized the behavior—they do not feel that they have really understood it.[7] And it is not only social scientists who feel this way. When we try to understand why an ordinary Iraqi citizen would wake up one morning and decide to turn himself into a living bomb, we are implicitly rationalizing his behavior. When we attempt to explain the origins of the recent financial crisis, we are effectively searching for rational financial incentives that led bankers to create and market high-risk assets. And when we blame soaring medical costs on malpractice legislation or procedure-based payments, we are instinctively invoking a model of rational action to understand why doctors do what they do. When we think about how we think, in other words, we reflexively adopt a framework of rational behavior.[8]

THINKING IS ABOUT MORE THAN THOUGHT

The implicit assumption that people are rational until proven otherwise is a hopeful, even enlightened, one that in general ought to be encouraged. Nevertheless, the exercise of rationalizing behavior glosses over an important difference between what we mean when we talk about “understanding” human behavior and what we mean when we talk about understanding the behavior of electrons, proteins, or planets. When trying to understand the behavior of electrons, for example, the physicist does not start by imagining himself in the circumstances of the electrons in question. He may have intuitions concerning theories about electrons, which in turn help him to understand their behavior. But at no point would he expect to understand what it is actually like to be an electron—indeed, the very notion of such intuition is laughable. Rationalizing human behavior, however, is precisely an exercise in simulating, in our mind’s eye, what it would be like to be the person whose behavior we are trying to understand. Only when we can imagine this simulated version of ourselves responding in the manner of the individual in question do we really feel that we have understood the behavior.

So effortlessly can we perform this exercise of “understanding by simulation” that it rarely occurs to us to wonder how reliable it is. And yet, as the earlier example of the organ donors illustrates, our mental simulations have a tendency to ignore certain types of factors that turn out to be important. The reason is that when we think about how we think, we instinctively emphasize consciously accessible costs and benefits such as those associated with motivations, preferences, and beliefs—the kinds of factors that predominate in social scientists’ models of rationality. Defaults, by contrast, are a part of the environment in which the decision maker operates, and so affect behavior in a way that is largely invisible to the conscious mind, and therefore largely absent from our commonsense explanations of behavior.[9] And defaults are just the proverbial tip of the iceberg. For several decades, psychologists and, more recently, behavioral economists have been examining human decision-making, often in controlled laboratory settings. Their findings not only undermine even the most basic assumptions of rationality but also require a whole new way of thinking about human behavior.[10]

In countless experiments, for example, psychologists have shown that an individual’s choices and behavior can be influenced by “priming” them with particular words, sounds, or other stimuli. Subjects in experiments who read words like “old” and “frail” walk more slowly down the corridor when they leave the lab. Consumers in wine stores are more likely to buy German wine when German music is playing in the background, and French wine when French music is playing. Survey respondents asked about energy drinks are more likely to name Gatorade when they are given a green pen to fill out the survey. And shoppers looking to buy a couch online are more likely to opt for an expensive, comfortable-looking couch when the background of the website shows fluffy white clouds, and more likely to buy the harder, cheaper option when the background consists of dollar coins.[11]

Our responses can also be skewed by the presence of irrelevant numerical information. In one experiment, for example, participants in a wine auction were asked to write down the last two digits of their social security numbers before bidding. Although these numbers were essentially random and certainly had nothing to do with the value a buyer should place on the wine, researchers nevertheless found that the higher the numbers, the more people were willing to bid. This effect, which psychologists call anchoring, affects all sorts of estimates that we make, from the number of countries in the African Union to how much money we consider a fair tip or donation. Whenever you receive a solicitation from a charity with a “suggested” donation amount, in fact, or a bill with precomputed tip percentages, you should suspect that your anchoring bias is being exploited—because by suggesting amounts on the high side, the requestor is anchoring your initial estimate of what is fair. Even if you subsequently adjust your estimate downward—because, say, a 25 percent tip seems like too much—you will probably end up giving more than you would have without the initial suggestion.[12]

Individual preferences can also be influenced dramatically simply by changing the way a situation is presented. Emphasizing one’s potential to lose money on a bet, for example, makes people more risk averse, while emphasizing one’s potential to win has the opposite effect, even when the bet itself is identical. Even more puzzling, an individual’s preferences between two items can be effectively reversed by introducing a third alternative. Say, for example, that option A is a high-quality, expensive camera while B is much lower in quality but also much cheaper. In isolation, this could be a difficult comparison to make. But if, as shown in the figure below, I introduce a third option, C1, that is clearly more expensive than A and around the same quality, the choice between A and C1 becomes unambiguous. In these situations people tend to pick A, which seems perfectly reasonable until you consider what happens if I introduce instead of C1 a third option, C2, that is about as expensive as B yet significantly lower in quality. Now the choice between B and C2 is clear, and so people tend to pick B. Depending on which third option is introduced, in other words, the preference of the decision maker can effectively be reversed between A and B, even though nothing about either has changed. What’s even stranger is that the third option—the one that causes the switch in preferences—is never itself chosen.[13]

[Figure: Illustration of preference reversal]

Continuing this litany of irrationality, psychologists have found that human judgments are often affected by the ease with which different kinds of information can be accessed or recalled. People generally overestimate the likelihood of dying in a terrorist attack on a plane relative to dying on a plane from any cause, even though the former is strictly less likely than the latter, simply because terrorist attacks are such vivid events. Paradoxically, people rate themselves as less assertive when they are asked to recall instances where they have acted assertively—not because the information contradicts their beliefs, but rather because of the effort required to recall it. They also systematically remember their own past behavior and beliefs to be more similar to their current behavior and beliefs than they really were. And they are more likely to believe a written statement if the font is easy to read, or if they have read it before—even if the last time they read it, it was explicitly labeled as false.[14]
