Manufacturing Depression
by Gary Greenberg

But the case had an even more direct impact on the treatment of depression. Osheroff had gone into Chestnut Lodge at the height of psychiatry’s thrash over DSM-III and filed his lawsuit just after it was published. In the new psychiatric world, with its diagnostic specificity and its magic-bullet drugs, therapists’ difficulty in passing scientific muster was a new kind of problem. As Gerald Klerman, writing in 1990 about the Osheroff case, put it:

If a pharmaceutical firm makes a claim for the efficacy of one of its products, it must generate enough evidence to satisfy the Food and Drug Administration before it can market the drug…No such mandate of responsibility exists for psychotherapy. Anyone can make a claim for the value of a form of psychotherapy…with no evidence as to its efficacy.

 

The moral of the Osheroff story, Klerman said, was that it was time to require of psychotherapies what Kefauver-Harris had required of drugs: proof that they worked.

 

It’s not that no one had tried to do that. In 1936, in fact, a prominent American psychologist, Saul Rosenzweig, published a paper examining therapy outcomes and concluded that all forms of therapy, competently practiced, were equally effective. Rosenzweig lifted the subtitle for his paper—“Everyone Has Won and All Must Have Prizes”—from the dodo bird’s verdict on the race in Alice in Wonderland. That might have been an unfortunate choice. His conclusion has gone down in history as the dodo bird effect—not an embarrassment of riches, that is, but just plain embarrassing.

In 1975, a team led by psychologist Lester Luborsky subjected the dodo bird effect to modern statistical methods. They looked at studies comparing one therapy to another, therapy to no therapy, psychotherapy to drug therapy, and time-limited to interminable therapies and concluded that all indeed must have prizes.

Luborsky also determined that there was nothing specific to a given therapy that accounted for its success. In part this was because the therapists generally chose the outcome measures, but even when the measure was an objective test (like the HAM-D), the dodo bird effect held. Luborsky suggested an explanation: “The different forms of psychotherapy have major common elements—a helping relationship with a therapist…along with the other related, nonspecific effects such as suggestion and abreaction [Freudian jargon for emotional catharsis].” These common elements—nonspecific factors—accounted for therapy’s success.

Luborsky’s work got updated from time to time, using increasingly sophisticated and impenetrable statistical techniques, and the result was virtually always the same. Something like three-quarters of patients are better off with therapy than they were without it. Patients themselves ratified this result, at least they did in a survey that appeared in Consumer Reports. There was “convincing evidence that therapy can make an important difference,” the magazine reported, adding that the most important factors were “competence [of the therapist] and personal chemistry”—not the particular school the therapist subscribed to or the techniques he employed. The conclusion is inescapable: to the extent that therapy succeeds, it’s due not to the particular help that’s offered, but rather to the fact that something is offered in the first place, and by a person whom the patient expects, and believes, will help. Therapy, no less than drugs, works by the placebo effect.

This shouldn’t be a surprise. To the extent that it is understood, the placebo effect seems to be the result of a patient’s entering into a caring relationship with a healer, which is a much more explicit feature of psychotherapy than of general medicine. Nor should this be bad news. It just means that when therapists listen with empathy, when we offer support and understanding, when we help people to pick up their pieces and fashion a story out of them, to make as much sense of their lives as they can and to withstand the uncertainty of whatever is left over, when we provide a space in which they are free to be just as confused and demoralized and ambivalent as they really are—that when we do all that, and when we do it well, it really does help. It would no doubt be better to have a world in which we therapists weren’t necessary, where narrative coherence wasn’t so hard to come by and people weren’t driven into private rooms to plumb the depths of their fears and their hopelessness, but that’s not this world, so having those rooms, and the professionals who occupy them, is the next best thing.

Still, psychotherapists, and particularly cognitive therapists, have not been content to take their prizes and go home. To the contrary, when psychopharmacologists like Klerman sounded off about the lack of evidence for therapy’s efficacy, or when Donald Klein, another leading antidepressant researcher, complained that “psychotherapies are not doing anything specific,” the professional guilds didn’t make the obvious point about the pot and the kettle. Nor did they take Klein’s comment that therapies are “nonspecifically beneficial to the final common pathway of demoralization” as an unintended compliment and trumpet the value of remoralization and their unique ability to bring it about.

Instead, they panicked. “If clinical psychology is to survive in this heyday of biological psychiatry,” a task force of the American Psychological Association warned in 1993, “APA must act to emphasize the strength of what we have to offer—a variety of psychotherapies of proven efficacy.” The gauntlet had been thrown down, said the task force, and therapists had to pick it up by meeting drugs on their own ground—in controlled clinical trials that would identify empirically supported therapies (ESTs). It turns out that you can make at least one kind of therapy into something like a drug—a specific treatment that can be given in known doses, whose active ingredient attacks a specific disease, and whose effects can be measured. And the DSM-III provided empirical therapies with a perfect target: depression.

It’s not an accident that more than 90 percent of EST trials focus on cognitive therapy. From the beginning, even before the DSM-III’s clinical-trial-friendly symptom lists, Aaron Beck had set out to create a therapy whose effects on depression could be validated scientifically. He did this by developing his theory that depression is caused by dysfunctional thoughts and core beliefs—and a treatment targeted directly at those causes, one that could be broken down into specific modules, standardized in a treatment manual, and taught to therapists, whose performance could in turn be evaluated by reviewing tapes of sessions and scoring them on the Cognitive Therapist Rating Scale. Beck also developed a test—the Beck Depression Inventory (BDI)—to measure the outcome. If you think there’s a circular logic at work here, not to mention a conflict of interest, you’re probably right. But it’s no worse than what Max Hamilton did when he fashioned his test to meet the needs of his drug company patrons. Besides, it’s easy to overlook such matters when the theory allows cognitive therapists to claim that they are attacking the psychological mechanisms of depression in the same precise way that antidepressants attack neurotransmitter imbalances.

 

In the mid-1970s, Beck got a chance to put his theory to the gold-standard test—a clinical trial. His team got a government grant to compare cognitive therapy to antidepressant drugs as a treatment for neurotic depression (as defined in DSM-II). The study had a simple design. All forty-one subjects were given tests, including the BDI and the HAM-D, at the beginning of the trial. Half were then given tricyclic antidepressants, the other half cognitive therapy, and at the end of the twelve-week trial they were retested. Cognitive therapy won hands down. Therapy patients’ scores on the tests dropped significantly more than those of the subjects on drugs. And, presumably because of the unpleasant side effects of the drugs, many fewer people dropped out of the therapy cohort than the antidepressant cohort.

The trial went on to have “a profound effect on the course of depression outcome research”—not only because of its results, but also because of how they were obtained. Beck and his team had done as much as possible to control for nonspecific factors. They had not only carefully measured the dose of therapy and continuously monitored therapists’ adherence to the treatment manual; they had also chosen inexperienced therapists, medical residents and psychology interns who presumably hadn’t yet picked up the tricks of the trade, who couldn’t command confidence or deploy empathy like thirty-year veterans do, and whose successes could thus be attributed more to what was in the treatment manual than what was in their personality or technique. Beck could then plausibly claim that he had obtained his results with a minimum of placebo effects and a maximum of “active ingredient,” that the reason CT outdistanced drugs was that there was something in the manual that was specifically therapeutic.

This impression was only strengthened over the next fifteen years as researchers replicated the finding that CT was as good as or better than drug treatment and added studies testing it against no therapy at all (other than an intake interview and placing the subject on a waiting list), and even against other therapies. As the findings mounted, professional and public opinion followed. In 1996, the New York Times reported that cognitive therapy was “the most scientifically tested form of psychotherapy…as effective as medication and traditional psychotherapy in helping patients with depression.” In 2000, the American Psychiatric Association issued practice guidelines asserting that cognitive therapy was among the therapies with “the best-documented effectiveness in the literature for the specific treatment of major depressive disorder.” Gerald Klerman’s dream of government regulation of therapy hasn’t yet come true, but a therapist not using cognitive therapy for depression would find himself on the margins of his profession. At least according to the Times, by 2006, cognitive therapy had become “the most widely practiced approach in America.”

Dig into the clinical trials that give cognitive therapy its stranglehold on depression treatment, however, and its claim to the status of most effective therapy begins to seem less than scientific. It turns out that cognitive therapy resembles antidepressant treatment in a way that Aaron Beck couldn’t have intended: like the drugs, it owes its marketplace dominance less to science than to its unique suitability to the particulars of the scientific game, and much more to the placebo effect than anyone wants to admit.

 

Some of the trouble is built into the idea of validating therapy. It’s hard to think of an enterprise less suited to lab testing than psychotherapy. What are the criteria of success and how do you measure them? How do you take all the thousands of words that are exchanged between therapist and patient—and for that matter all the nonverbal exchanges, the averted eyes and the fidgeting, the fleeting smile and brimming tears—and render them into data bits? The solution that researchers have hit upon is to ignore as much of that fuzzy stuff as possible and focus instead on what they can measure. This generally means doing exactly what Beck did: standardizing the treatment in a manual, aiming it at specific targets, such as the symptoms of depression found in the DSM, and then measuring the change in those symptoms after the therapy is implemented.

Critics complain that while this approach may work well in the laboratory, it has precious little relationship to what goes on in the real world. The lab therapist, indeed, does exactly the opposite of what most real-life therapists do: refrains from clinical judgment in favor of the manual and limits his focus to a set of symptoms rather than to the patient as a whole. “Psychotherapy is essentially concerned with people, not conditions or disorders,” wrote one dissenting psychiatrist, “and its methods arise out of an intimate relationship…that cannot easily be reduced to a set of prescribed techniques.” Add to this objection the fact that for both subject and therapist the proceedings are framed as a research project rather than as an encounter whose intention is to ease psychic suffering, and you have to wonder if the therapy studied in clinical trials is merely an artifact, a bell jar version of the real thing.

Cognitive therapists are aware of this disconnect, or at least Leslie Sokol is. On the first day of our workshop, she told us not to sweat the data too hard, at least not the part about people getting better after a prescribed dose of sessions. “Cognitive therapy is thought of as time limited because research demanded it,” she said. “We delivered this amount of sessions not because there was a magic number but because we were running trials and we can’t run them indefinitely. Time limited,” she added, “really means goal limited.” (It also evidently means “having it both ways,” as in claiming to have a lab-validated treatment model that specifies a certain dose of therapy, but then, when out of the glare of the lab lights, not sticking to it.)

Cognitive therapists don’t only claim that their treatment works; they also assert that it is superior to therapies that haven’t been tested. This is another advantage of adopting the drug model; according to the logic of clinical trials, absence of evidence is evidence of absence. That’s why Steven Hollon, an early collaborator with Aaron Beck and a leader in the field, can get away with writing that the fact that “empirically supported therapies are still not widely practiced…[means] that many patients do not have access to adequate treatments”—as if it had already been proved that the only adequate treatments are empirically supported therapies.

That’s not the only way that the fix is in. Consider what happens when researchers try to institute placebo controls. In a drug trial, the placebo is a pill, and it is at least arguable that the only difference between the placebo and the drug is whatever is inside the two pills, so long as the patient is otherwise treated the same. Early EST trials used waiting lists as the placebo treatment; people do indeed get better merely by being told that help is on the way. But that procedure does not allow researchers to zero in on the active ingredient—assuming such a thing exists—of a given therapy.
