Everything Is Obvious
by Duncan J. Watts

But when it comes to the human world, where our unaided intuition is so much better than it is in physics, we rarely feel the need to use the scientific method. Why is it, for example, that most social groups are so homogeneous in terms of race, education level, and even gender? Why do some things become popular and not others? How much does the media influence society? Is more choice better or worse? Do taxes stimulate the economy? Social scientists are endlessly perplexed by these questions, yet many people feel as though they could come up with perfectly satisfactory explanations themselves. We all have friends, most of us work, and we generally buy things, vote, and watch TV. We are constantly immersed in markets, politics, and culture, and so are intimately familiar with how they work—or at least that is how it seems to us. Unlike problems in physics, biology, and so on, therefore, when the topic is human or social behavior, the idea of running expensive, time-consuming “scientific” studies to figure out what we’re pretty sure we already know seems largely unnecessary.

HOW COMMON SENSE FAILS US

Without a doubt, the experience of participating in the social world greatly facilitates our ability to understand it. Were it not for the intimate knowledge of our own thought processes, along with countless observations of the words, actions, and explanations of others—both experienced in person and also learned remotely—the vast intricacies of human behavior might well be inscrutable. Nevertheless, the combination of intuition, experience, and received wisdom on which we rely to generate commonsense explanations of the social world also disguises certain errors of reasoning that are every bit as systematic and pervasive as the errors of commonsense physics. Part One of this book is devoted to exploring these errors, which fall into three broad categories.

The first type of error is that when we think about why people do what they do, we invariably focus on factors like incentives, motivations, and beliefs, of which we are consciously aware. As sensible as it sounds, decades of research in psychology and cognitive science have shown that this view of human behavior encompasses just the tip of the proverbial iceberg. It doesn’t occur to us, for example, that the music playing in the background can influence our choice of wine in the liquor store, or that the font in which a statement is written may make it more or less believable; so we don’t factor these details into our anticipation of how people will react. But they do matter, as do many other apparently trivial or seemingly irrelevant factors. In fact, as we’ll see, it is probably impossible to anticipate everything that might be relevant to a given situation. The result is that no matter how carefully we try to put ourselves in someone else’s shoes, we are likely to make serious mistakes when predicting how they’ll behave anywhere outside of the immediate here and now.

If the first type of commonsense error is that our mental model of individual behavior is systematically flawed, the second type is that our mental model of collective behavior is even worse. The basic problem here is that whenever people get together in groups—whether at social events, workplaces, volunteer organizations, markets, political parties, or even as entire societies—they interact with one another, sharing information, spreading rumors, passing along recommendations, comparing themselves to their friends, rewarding and punishing each other’s behaviors, learning from the experience of others, and generally influencing one another’s perspectives about what is good and bad, cheap and expensive, right and wrong. As sociologists have long argued, these influences pile up in unexpected ways, generating collective behavior that is “emergent” in the sense that it cannot be understood solely in terms of its component parts. Faced with such complexity, however, commonsense explanations instinctively fall back on the logic of individual action. Sometimes we invoke fictitious “representative individuals” like “the crowd,” “the market,” “the workers,” or “the electorate,” whose actions stand in for the actions and interactions of the many. And sometimes we single out “special people,” like leaders, visionaries, or “influencers,” to whom we attribute all the agency. Regardless of which trick we use, however, the result is that our explanations of collective behavior paper over most of what is actually happening.

The third and final type of problem with commonsense reasoning is that we learn less from history than we think we do, and that this misperception in turn skews our perception of the future. Whenever something interesting, dramatic, or terrible happens—Hush Puppies become popular again, a book by an unknown author becomes an international best seller, the housing bubble bursts, or terrorists crash planes into the World Trade Center—we instinctively look for explanations. Yet because we seek to explain these events only after the fact, our explanations place far too much emphasis on what actually happened relative to what might have happened but didn’t. Moreover, because we only try to explain events that strike us as sufficiently interesting, our explanations account only for a tiny fraction even of the things that do happen. The result is that what appear to us to be causal explanations are in fact just stories—descriptions of what happened that tell us little, if anything, about the mechanisms at work. Nevertheless, because these stories have the form of causal explanations, we treat them as if they have predictive power. In this way, we deceive ourselves into believing that we can make predictions that are impossible, even in principle.

Commonsense reasoning, therefore, does not suffer from a single overriding limitation but rather from a combination of limitations, all of which reinforce and even disguise one another. The net result is that common sense is wonderful at making sense of the world, but not necessarily at understanding it. By analogy, in ancient times, when our ancestors were startled by lightning bolts descending from the heavens, accompanied by claps of thunder, they assuaged their fears with elaborate stories about the gods, whose all-too-human struggles were held responsible for what we now understand to be entirely natural processes. In explaining away otherwise strange and frightening phenomena in terms of stories they did understand, they were able to make sense of them, effectively creating an illusion of understanding about the world that was enough to get them out of bed in the morning. All of which is fine. But we would not say that our ancestors “understood” what was going on, in the sense of having a successful scientific theory. Indeed, we tend to regard the ancient mythologies as vaguely amusing.

What we don’t realize, however, is that common sense often works just like mythology. By providing ready explanations for whatever particular circumstances the world throws at us, commonsense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe. The cost, however, is that we think we have understood things that in fact we have simply papered over with a plausible-sounding story. And because this illusion of understanding in turn undercuts our motivation to treat social problems the way we treat problems in medicine, engineering, and science, the unfortunate result is that common sense actually inhibits our understanding of the world. Addressing this problem is not easy, although in Part Two of the book I will offer some suggestions, along with examples of approaches that are already being tried in the worlds of business, policy, and science. The main point, though, is that just as an unquestioning belief in the correspondence between natural events and godly affairs had to give way in order for “real” explanations to be developed, so too, real explanations of the social world will require us to examine what it is about our common sense that misleads us into thinking that we know more than we do.

CHAPTER 2
Thinking About Thinking

In many countries around the world, it is common for the state to ask its citizens if they will volunteer to be organ donors. Now, organ donation is one of those issues that elicit strong feelings from many people. On the one hand, it’s an opportunity to turn one person’s loss into another person’s salvation. But on the other hand, it’s more than a little unsettling to be making plans for your organs that don’t involve you. It’s not surprising, therefore, that different people make different decisions, nor is it surprising that rates of organ donation vary considerably from country to country. It might surprise you to learn, however, how much cross-national variation there is. In a study conducted a few years ago, two psychologists, Eric Johnson and Dan Goldstein, found that rates at which citizens consented to donate their organs varied across different European countries, from as low as 4.25 percent to as high as 99.98 percent. What was even more striking about these differences is that they weren’t scattered all over the spectrum, but rather were clustered into two distinct groups—one group that had organ-donation rates in the single digits and teens, and one group that had rates in the high nineties—with almost nothing in between.[1]

What could explain such a huge difference? That’s the question I put to a classroom of bright Columbia undergraduates not long after the study was published. Actually, what I asked them to consider was two anonymous countries, A and B. In country A, roughly 12 percent of citizens agree to be organ donors, while in country B 99.9 percent do. So what did they think was different about these two countries that could account for the choices of their citizens? Being smart and creative students, they came up with lots of possibilities. Perhaps one country was secular while the other was highly religious. Perhaps one had more advanced medical care, and better success rates at organ transplants, than the other. Perhaps the rate of accidental death was higher in one than in the other, resulting in more available organs. Or perhaps one had a highly socialist culture, emphasizing the importance of community, while the other prized the rights of individuals.

All were good explanations. But then came the curveball. Country A was in fact Germany, and country B was … Austria. My poor students were stumped—what on earth could be so different about Germany and Austria? But they weren’t giving up yet. Maybe there was some difference in the legal or education systems that they didn’t know about? Or perhaps there had been some important event or media campaign in Austria that had galvanized support for organ donation. Was it something to do with World War II? Or maybe Austrians and Germans are more different than they seem. My students didn’t know what the reason for the difference was, but they were sure it was something big—you don’t see extreme differences like that by accident. Well, no—but you can get differences like that for reasons that you’d never expect. And for all their creativity, my students never pegged the real reason, which is actually absurdly simple: In Austria, the default choice is to be an organ donor, whereas in Germany the default is not to be. The difference in policies seems trivial—it’s just the difference between having to mail in a simple form and not having to—but it’s enough to push the donor rate from 12 percent to 99.9 percent. And what was true for Austria and Germany was true across all of Europe—all the countries with very high rates of organ donation had opt-out policies, while the countries with low rates were all opt-in.

DECISIONS, DECISIONS

Understanding the influence of default settings on the choices we make is important, because our beliefs about what people choose and why they choose it affect virtually all our explanations of social, economic, and political outcomes. Read the op-ed section of any newspaper, watch any pundit on TV, or listen to late-night talk radio, and you will be bombarded with theories of why we choose this over that. And although we often decry these experts, the broader truth is that all of us—from politicians and bureaucrats, to newspaper columnists, to corporate executives and ordinary citizens—are equally willing to espouse our own theory of human choice. Indeed, virtually every argument of social consequence—whether about politics, economic policy, taxes, education, healthcare, free markets, global warming, energy policy, foreign policy, immigration policy, sexual behavior, the death penalty, abortion rights, or consumer demand—is either explicitly or implicitly an argument about why people make the choices they make. And, of course, how they can be encouraged, educated, legislated, or coerced into making different ones.

Given the ubiquity of choice in the world and its relevance to virtually every aspect of life—from everyday decisions to the grand events of history—it should come as little surprise that theories about how people make choices are also central to most of the social sciences. Commenting on an early paper by the Nobel laureate Gary Becker, the economist James Duesenberry famously quipped that “economics is all about choice, while sociology is about why people have no choices.”[2] But the truth is that sociologists are every bit as interested in how people make choices as economists are—not to mention political scientists, anthropologists, psychologists, and legal, business, and management scholars. Nevertheless, Duesenberry had a point in that for much of the last century, social and behavioral scientists of different stripes have tended to view the matter of choice in strikingly different ways. More than anything, they have differed, sometimes acrimoniously, over the nature and importance of human rationality.

COMMON SENSE AND RATIONALITY

To many sociologists, the phrase “rational choice” evokes the image of a cold, calculating individual who cares only for himself and who relentlessly seeks to maximize his economic well-being. Nor is this reaction entirely unjustified. For many years, economists seeking to understand market behavior invoked something like this notion of rationality—sometimes referred to as “homo economicus”—in large part because it lends itself naturally to mathematical models that are simple enough to be written down and solved. And yet, as countless examples like the ultimatum game from the previous chapter show, real people care not only about their own welfare, economic or otherwise, but also about the welfare of others, for whom they will often make considerable sacrifices. We also care about upholding social norms and conventions, and frequently punish others who violate them—even when doing so is costly.[3] And finally, we often care about intangible benefits, like our reputation, belonging to a group, and “doing the right thing,” sometimes as much as or even more than we care about wealth, comfort, and worldly possessions.
