By the way, how did your answer compare to the 65-group’s average of 45 percent? In case you’re wondering, the correct fraction of African U.N. member nations is currently 23 percent.
The initial response to anchoring was denial (and that’s not the name of a river flowing through those African nations). “The default reaction to a paper is to ignore it,” Kahneman explained. In this case, scholars were convinced the paper had to be wrong. It seemed incredible that a simple parlor trick could have such a large effect on educated people’s judgment.
Psychologists have since replicated the anchoring experiment with many variations. You do not need a wheel of fortune, or a random number, to have anchoring. You don’t even need a reasonable number. Psychologist George Quattrone tried these questions:
• Is the average temperature in San Francisco higher or lower than 558 degrees Fahrenheit? What is the average temperature of San Francisco?
• How many top-ten records did the Beatles release—more than 100,025, or less than 100,025? Now give your estimate of the number of top-ten Beatles records.
These numbers are completely wacko. You’d think they couldn’t possibly affect guesses about how warm San Francisco is, or how many top-ten Beatles records there were . . . except that they did. People primed with these and other absurdly high anchors gave higher estimates than those who received low anchors.
Now of course no one guessed the temperature of San Francisco was anything close to 500 degrees. Everyone knew it was a two-digit number, somewhere between room temperature and freezing. Anchoring is constrained by whatever people know or believe to be true. A geography wonk who knows the percentage of African U.N. members will give that correct answer and not be swayed by a random number. Anchoring is an artifact of guessing.
A team led by Timothy Wilson of the University of Virginia did an experiment in which they offered a prize—dinner for two at a popular restaurant—for the most accurate estimate of the number of physicians in the local phone book. This was again posed as a two-part question, with high and low anchors for different groups. Wilson and company reasoned that the incentive of an expensive dinner might cause the subjects to concentrate on getting the best answer and not to rattle off any silly number that popped into their heads. Instead, they found that the anchoring effect was almost as strong with the incentive as without it.
Wilson’s group even tried warning about the perils of anchoring. One set of participants received instructions saying that “a number in people’s heads can influence their answers to subsequent questions . . . When you answer the questions on the following pages, please be careful not to have this contamination effect happen to you. We would like the most accurate estimates you can come up with.”
The warning didn’t work. The subjects’ estimates were still influenced by meaningless numbers. Most likely, those who got the warning did try to correct for anchoring, Wilson’s team proposes. But they couldn’t do it, any more than someone can obey the instruction not to think of an elephant.
“We suggest that because anchoring effects occur unintentionally and unconsciously, it was difficult for people to know the extent to which an anchor value influenced their estimates,” Wilson’s group wrote. “As a result, they were at the mercy of naïve theories about how susceptible they were to anchoring effects.”
For “naïve theories,” read: anchoring can’t happen to me.
It is often necessary to translate personal values into numbers that can be communicated to others. Anchoring appears to be a feature (bug?) of the mental software that lets us do that. Whenever we guesstimate an unknown quantity that cannot be calculated, we are liable to be influenced by other numbers just mentioned or considered. This isn’t something we’re aware of—it takes experiments with groups to demonstrate it statistically—but it is real nonetheless. Anchoring is part of the process that helps us to make wild guesses and have hunches; to jot offers and counteroffers on cocktail napkins; to rate restaurants and sexual partners on a scale of 1 to 10; and, generally, to function in a number- and money-obsessed society. Anchoring works with all kinds of numbers—including those prefixed with dollar signs.
For a good example of anchoring in action, check out the prices charged for Broadway and Las Vegas show tickets. “Cheap seats don’t sell,” one candid (and anonymous) Broadway producer told the blog TalkinBroadway in 1999. “You know why they don’t sell? Because if you price Orchestra or Mezzanine seats real cheap, people think there is something wrong with them.”
Broadway depends on tourists who have a limited time to pick a show and may have only a sketchy notion of what they’re buying. Least of all are they in a position to judge how much specific seats are worth. In assessing the value of a seat, there’s not much a tourist can do except take a cue from the ticket’s price (“you get what you pay for”). A ticket’s perceived value is proportional to its price, almost regardless of what that price is. Many believe that the $480 premium orchestra seats for The Producers were a factor in that show’s long, profitable run. Tourists figured that any show with $480 tickets must be worth seeing—and headed for the TKTS booth.
That’s an important point: theatergoers who wouldn’t dream of paying $480 for a ticket were still affected by that price. It made whatever they did pay seem like a deal. (It’s the same show, after all.) “Scaling the house” is the process of assigning prices to theater or concert seats in different parts of the venue. It’s a vital part of the business, often making the difference between a sold-out and half-empty house. The anonymous producer revealed that
I now scale all the Orchestra and most of the Mezzanine seats at top price. If you do that, you sell them in a heartbeat . . . I can scale a house so I got a dozen different prices—from top to real cheap—and sell out the top-priced seats and have most of the cheaper seats empty. Or, I can scale a house where 70–80% of it is top price. You know what, when most of the seats are top price, even if I send 40% of the tickets for a performance to the TKTS Booth, I still make more money.
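To make the producer’s arithmetic concrete, here is a minimal sketch comparing the two ways of scaling a house. Every number in it (house size, prices, sell-through rates, the TKTS discount) is a made-up assumption for illustration; none of it comes from the book.

```python
# Illustrative only: a toy comparison of two ways to "scale the house".
# All figures (house size, prices, sell-through rates, TKTS discount)
# are hypothetical assumptions, not numbers from the book.

HOUSE_SIZE = 1000      # hypothetical seats per performance
TOP_PRICE = 100.0      # hypothetical top ticket price
CHEAP_PRICE = 40.0     # hypothetical "real cheap" price
TKTS_DISCOUNT = 0.5    # assume TKTS tickets go for half price

def revenue_graduated_scale():
    # A many-price house, simplified to two tiers: 30% of seats at top
    # price sell out, while only 40% of the cheaper seats find buyers.
    top_sold = int(HOUSE_SIZE * 0.3)
    cheap_sold = int(HOUSE_SIZE * 0.7 * 0.4)
    return top_sold * TOP_PRICE + cheap_sold * CHEAP_PRICE

def revenue_mostly_top_price():
    # 80% of the house scaled at top price; 40% of those tickets are
    # dumped at the TKTS booth at a discount, the rest sell at full price.
    top_seats = int(HOUSE_SIZE * 0.8)
    full_price_sold = int(top_seats * 0.6)
    tkts_sold = top_seats - full_price_sold
    cheap_sold = int(HOUSE_SIZE * 0.2 * 0.4)
    return (full_price_sold * TOP_PRICE
            + tkts_sold * TOP_PRICE * (1 - TKTS_DISCOUNT)
            + cheap_sold * CHEAP_PRICE)

print(f"Graduated scale:   ${revenue_graduated_scale():,.0f}")   # $41,200
print(f"Mostly top price:  ${revenue_mostly_top_price():,.0f}")  # $67,200
```

Under these toy assumptions, the mostly-top-price scale brings in more money even after 40 percent of its tickets are sold at half price through TKTS, which is the producer’s point.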
For years, the Hollywood Bowl has offered tickets as cheap as one dollar to its summer concerts. The Bowl is run by the County of Los Angeles, and the dollar seats are intended as a public service. The trouble is that those who’ve never tried them assume they’re awful. The Bowl is a huge place (17,376 seats), and the one-dollar seats are the farthest from the stage. But the musical experience is essentially the same (amplified, and supplemented with the occasional police helicopter). The view of the sunset and city is better from the dollar seats. Much of the time, the hundred-dollar seats are packed and unobtainable, while the one-dollar seats are empty. A lot of music lovers miss out—because the price is too low.
When Amos Tversky received a MacArthur grant in 1984, he joked that his work had established what was long known to “advertisers and used-car salesmen.” This was not just self-deprecating wit. At the time, those Machiavellian practitioners were probably more open to what Tversky was saying than most economists or CEOs were. Marketers had long been doing experiments in the psychology of prices. In the heyday of mail order, it was common to print up multiple versions of a catalog or flyer in order to test the effect of pricing strategies. These findings must have dispelled any illusions about the fixity of prices. Marketers and salespeople knew too well that what a customer was willing to pay was changeable and that there was money to be made from that fact. Economist Donald Cox has gone so far as to say that much of behavioral economics is “old hat to marketing experts, who have long since booted homo economicus out of their focus groups.”
Today there is a symbiosis between psychologists studying prices and the marketing and price consultant communities. Many leading theorists, including Tversky, Kahneman, Richard Thaler, and Dan Ariely, have published important work in marketing journals. Price consultant Simon-Kucher & Partners has an academic advisory board with scholars from three continents. Today’s marketers talk up anchoring and coherent arbitrariness—and their somewhat unnerving power. “Many people like myself who teach marketing start the course by saying, ‘We’re not about manipulating consumers, we’re about discovering needs and meeting them,’ ” said Eric Johnson of Columbia University. “And then, if you’re in the field awhile, you realize, yes, we can manipulate consumers.”
Among the first professions to take note of behavioral decision theory was the law. There was some eye-opening research on jury award anchoring published in the years before Liebeck v. McDonald’s. In a 1989 study, psychologists John Malouff and Nicola Schutte had four groups of mock jurors read a description of an actual personal injury case in which the defendant had been found liable. All groups were told that the defense attorney had suggested a damage award of $50,000. The one variable was the amount that they were told the plaintiff’s lawyer had requested. A group informed that the plaintiff’s attorney had asked for $100,000 awarded an average of $90,333. Another group, told that the attorney had demanded $700,000, awarded an average of $421,538.
Had the jurors been able to deduce a “correct” amount, it should have been the same for all the groups. The facts of the case were unchanged. But of course there is no formula for arriving at a legal award. That leaves jurors susceptible to suggestion. When you chart Malouff and Schutte’s four data points (they also exposed groups to demands of $300,000 and $500,000), you get a remarkably straight line. Though the jurors always awarded less than the plaintiff’s demand, the amounts went up in lockstep with the demand.
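As a rough check on how steeply awards tracked demands, here is a small sketch that draws a line through the only two data points quoted above. The $300,000 and $500,000 group averages are not reproduced in this excerpt, so this illustrates the trend rather than recreating Malouff and Schutte’s full chart.

```python
# Back-of-the-envelope check on the "remarkably straight line" claim,
# using only the two group averages quoted above.

demands = [100_000, 700_000]
awards = [90_333, 421_538]

# Slope and intercept of the line through the two quoted points:
# roughly how many extra dollars were awarded per extra dollar demanded.
slope = (awards[1] - awards[0]) / (demands[1] - demands[0])
intercept = awards[0] - slope * demands[0]

print(f"slope: about {slope:.2f} dollars awarded per dollar demanded")
print(f"intercept: about ${intercept:,.0f}")
```

With the two published points, the slope works out to roughly 55 cents of award per dollar of demand, steep enough to make the “lockstep” description plausible.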
In their wildest dreams, few attorneys imagined that jurors were that malleable. This and other studies raised the question: Just how far can you push anchoring in the courtroom? Does a smart attorney ask for a billion gazillion dollars?
The conventional wisdom says no. There is said to be a “boomerang effect.” Over-the-top demands backfire by making the plaintiff or attorney look greedy. Juries retaliate by awarding less than they would have with a more sensible demand.
Psychologists Gretchen Chapman and Brian Bornstein tested this idea in a 1996 experiment, when Liebeck v. McDonald’s was much in the news. They presented eighty University of Illinois students with the hypothetical case of a young woman who said she contracted ovarian cancer from birth control pills and was suing her HMO. Four groups each heard a different demand for damages: $100; $20,000; $5 million; and $1 billion. The mock jurors were asked to give compensatory damages only. Anyone who wants to believe in the jury system must find the results astonishing.
| Demand | Award (average) |
| --- | --- |
| $100 | $990 |
| $20,000 | $36,000 |
| $5 million | $440,000 |
| $1 billion | $490,000 |
The jurors were amazingly persuadable, up through the $5 million demand. The lowball $100 demand got a piddling $990 average award. This was for a cancer said to have the plaintiff “almost constantly in pain . . . Doctors do not expect her to survive beyond a few more months.”
Increasing the demand 200-fold, to $20,000, increased the award about 36-fold, to $36,000. Demanding $5 million got another 12-fold increase on top of that.
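The diminishing returns can be read straight off the table. The short sketch below simply redoes the fold-increase arithmetic from the four quoted averages.

```python
# Fold-increase arithmetic from Chapman and Bornstein's averages,
# exactly as quoted in the table above.

data = [              # (demand, average award)
    (100,             990),
    (20_000,          36_000),
    (5_000_000,       440_000),
    (1_000_000_000,   490_000),
]

for (d0, a0), (d1, a1) in zip(data, data[1:]):
    print(f"demand x{d1 / d0:,.0f} -> award x{a1 / a0:.1f} "
          f"(${a0:,} to ${a1:,})")
```

The last comparison is the diminishing-returns result in miniature: multiplying the demand another 200-fold, from $5 million to $1 billion, raised the average award by only about 11 percent.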
Chapman and Bornstein’s experiment could not rule out a boomerang effect, but it found no evidence for it. Instead, it found diminishing returns. Asking for $1 billion—an utterly insane number—still got more money than asking for $5 million did. It just didn’t get much more.
Anecdotal evidence can mislead. Lawyers remember the time they asked for a lot and got less than they hoped. Any attorney crazy enough to ask for $1 billion might be disappointed by a $490,000 award and blame it on a boomerang effect. This experiment showed, however, that the billion-dollar figure fared the best of the four demands tested.
Jurors are instructed to base compensatory awards on pain and suffering. Chapman and Bornstein asked their jurors to rate the plaintiff’s suffering on a numerical scale. They found no meaningful correlation between estimates of suffering and the amounts awarded. In other words, the variable that was supposed to matter didn’t, and a variable that was supposed to be irrelevant—the plaintiff’s demand—did.
The psychologists also asked the jurors, “How likely is it that the defendant caused the plaintiff’s injury?” The reported likelihood increased modestly with the size of the award. There was thus no evidence that the billion-dollar demand hurt the credibility of the plaintiff’s case.
S. Reed Morgan, of the McDonald’s coffee lawsuit, has described attorneys such as himself as “entrepreneurs.” By seeking liability suit jackpots, professional litigators provide incentives for big companies to worry about the safety of their products. Less sympathetic observers dismiss this as “lottery litigation.” Either way, attorneys facing the legal wheel of fortune sometimes refrain from asking jurors for a specific amount. They fear that a reasonable figure might preempt a windfall, and a high-end figure could boomerang. Chapman and Bornstein’s experiment suggests otherwise. The title of their paper says it all: “The More You Ask For, the More You Get.”