
Seeding trials raise several serious issues. To begin with, the purpose of the trial is hidden not only from the participating patients and doctors, but also from the ethics committees giving permission for access to patients. The editorial accompanying the paper that exposed the ADVANTAGE trial is as damning, on this point, as any academic journal article could possibly be.

    [These documents]…tell us that deception is the key to a successful seeding trial…Institutional review boards, whose purpose is to protect humans who participate in research, would probably not approve an action that places patients in harm’s way in order to influence physicians’ prescribing habits. If they knew, few established clinical researchers would participate as coinvestigators. Few physicians would knowingly enroll their patients in a study that placed them at risk in order to provide a company with a marketing advantage, and few patients would agree to participate. Seeding trials can occur only because the company does not disclose its true purpose to anyone who could say ‘no’.33

So seeding trials mislead patients. It’s also poignant – for me, as a medic, at any rate – to imagine the hollow boasts from vain, arrogant, hoodwinked doctors. ‘We’re having some great results with Vioxx, actually,’ you can imagine them saying in the pub. ‘Did I tell you I’m an investigator on that trial? It’s fascinating work we’re doing…’

But there are much more concrete concerns about these trials: they can also produce poor-quality data, because the design is geared towards marketing rather than towards answering a meaningful clinical question. Collecting data from small numbers of patients in many different locations invites all kinds of unnecessary problems: lower quality control for the information, for example, poorer training for research staff, an increased risk of misconduct or incompetence, and so on.

This is clear from another seeding trial, called STEPS, which involved giving a drug called Neurontin to epilepsy patients in community neurology clinics. Its true purpose was revealed, again, when internal company documents were released during litigation (once again: this is why drug companies will move heaven and earth to settle legal cases confidentially, and out of court).34

As you would expect, these documents candidly describe the trial as a marketing device. One memorable memo reads: ‘STEPS is the best tool we have for Neurontin, and we should be using it wherever we can.’ To be absolutely clear, this quote isn’t discussing using the results of the trial to market the drug: it was written while the trial was being conducted.

The same ethical concerns as before are raised by this trial, as patients and doctors were once again misled. But equally concerning is the quality of the data: doctors participating as ‘investigators’ were poorly trained, with little or no experience of trials, and there was no auditing before the trial began. Each doctor recruited only four patients on average, and they were closely supervised, not by academics, but by sales representatives, who were directly involved in collecting the data, filling out study forms, and even handing out gifts as promotional rewards during data collection.

This is all especially concerning, because Neurontin isn’t a harmless drug. Out of 2,759 patients there were 73 serious adverse events, 997 patients with side effects, and 11 deaths (though as you will know, we cannot be sure whether those were attributable to the drug). For Vioxx, the drug in the ADVANTAGE seeding trial, the situation is even more grave, as this drug was eventually taken off the market because it increased the risk of heart attacks in patients taking it. We do good-quality research in order to detect benefits, or serious problems, with medicines, and a proper piece of trial research, focused on real outcomes, might have helped to detect this risk much earlier, and reduced the harm inflicted on patients.

Spotting seeding trials, even today, is fraught with difficulty. Suspicions are raised whenever a new trial of a recently marketed drug is published in which the number of recruitment sites is unusually high, and only a small number of patients were recruited at each one. This is not uncommon.

But in the absence of any documentary proof that these trials were designed with viral marketing in mind, very few academics would dare to call them out in public.

Pretend it’s all positive regardless

    At the end of your trial, if your result is unimpressive, you can exaggerate it in the way that you present the numbers; and if you haven’t got a positive result at all, you can just spin harder.

All of this has been a little complicated, at times. But there is one easy way to fix an unflattering trial result: you can simply talk it up. A good example of this comes from the world of statins. From the evidence currently available on these drugs, it looks as if they roughly halve your risk of having a heart attack in a given period, regardless of how large your pre-existing risk is. So, if your risk of heart attack is pretty big – you’ve got high cholesterol, you smoke, you’re overweight, and so on – then a statin reduces that large yearly risk of a heart attack by half. But if your risk of a heart attack is tiny, it reduces that tiny risk by half, which is a tiny change in a tiny risk. If you find it easier to visualise with a concrete example, picture this: your chances of dying from a meteor landing on your head are dramatically lower if you wear a motorbike crash helmet every day, but meteors don’t land on people’s heads very often.

It’s worth noting that there are several different ways of numerically expressing a reduction in risk, and they each influence our thinking in different ways, even though they accurately describe the same reality. Let’s say your chances of a heart attack in the next year are high: forty people out of 1,000 like you will have a heart attack in the next year, or if you prefer, 4 per cent of people like you. Let’s say those people are treated with a statin, and their risk is reduced, so only twenty of them will have a heart attack, or 2 per cent. We could say this is ‘a 50 per cent reduction in the risk of heart attack’, because it’s gone from 4 per cent to 2 per cent. That way of expressing the risk is called the ‘relative risk reduction’: it sounds dramatic, as it has a nice big number in it. But we could also express the same change in risk as the ‘absolute risk reduction’, the change from 4 per cent to 2 per cent, which makes a change of 2 per cent, or ‘a 2 per cent reduction in the risk of heart attack’. That sounds less impressive, but it’s still OK.

Now, let’s say your chances of having a heart attack in the next year are tiny (you can probably see where I’m going, but I’ll do it anyway). Let’s say that four people out of 1,000 like you will have a heart attack in the next year, but if they are all on statins, then only two of them will have such a horrible event. Expressed as relative risk reduction, that’s still a 50 per cent reduction. Expressed as absolute risk reduction, it’s a 0.2 per cent reduction, which sounds much more modest.
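
If you like to see the sums laid out, here is a minimal Python sketch of that arithmetic, using the figures from the two scenarios above (the function names are my own, purely for illustration):

    # Two ways of expressing the same drop in risk, with risks
    # written as proportions (40 in 1,000 = 0.04).

    def relative_risk_reduction(baseline, treated):
        # Proportional drop in risk: (baseline - treated) / baseline
        return (baseline - treated) / baseline

    def absolute_risk_reduction(baseline, treated):
        # Simple difference between the two risks
        return baseline - treated

    # High-risk group: 40 in 1,000 fall to 20 in 1,000 on the statin
    print(relative_risk_reduction(0.040, 0.020))  # ~0.5   -> '50 per cent'
    print(absolute_risk_reduction(0.040, 0.020))  # ~0.02  -> '2 per cent'

    # Low-risk group: 4 in 1,000 fall to 2 in 1,000 on the statin
    print(relative_risk_reduction(0.004, 0.002))  # ~0.5   -> still '50 per cent'
    print(absolute_risk_reduction(0.004, 0.002))  # ~0.002 -> '0.2 per cent'

The relative figure is identical in both scenarios; only the absolute figure tells you how much the drug matters for someone like you.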

There are many people in medicine who are preoccupied with how best to communicate such risks and results, a number of them working in the incredibly exciting field known as ‘shared decision-making’.35 They have created all kinds of numerical tools to help clinicians and patients work out exactly what benefit they would get from each treatment option when presented with, say, different choices for chemotherapy after surgery for a breast tumour. The advantage of these tools is that they take doctors much closer to their future role: a kind of personal shopper for treatments, people who know how to find evidence, and can communicate risk clearly, but who can also understand, in discussion with patients, their interests and priorities, whether those are ‘more life at any cost’ or ‘no side effects’.

Research has shown that if you present benefits as a relative risk reduction, people are more likely to choose an intervention. One study, for example, took 470 patients in a waiting room, gave them details of a hypothetical disease, then explained the benefits of two possible treatment options.36 In fact, both these treatments were the same, offering the same benefit, but with the risk expressed in two different ways. More than half of the patients chose the medication for which the benefit was expressed as a relative risk reduction, while only one in six chose the one whose benefit was expressed in absolute terms (most of the rest were indifferent).

It would be wrong to imagine that patients are unique in being manipulated by the way figures on risk and benefit are presented. In fact, exactly the same result has been found repeatedly in experiments looking at doctors’ prescribing decisions,37 and even the purchasing decisions of health authorities,38 where you would expect to find numerate doctors and managers, capable of calculating risk and benefit.

That is why it is concerning to see relative risk reduction used so frequently in reporting the modest benefits of new treatments, both in mainstream media and in professional literature. One good recent example comes, again, from the world of statins, in the coverage around the Jupiter trial.

This study looked at the benefits of an existing drug, rosuvastatin, for people at low risk of heart attack. In the UK most newspapers called it a ‘wonder drug’ (the Daily Express, bless it, thought it was an entirely new treatment,39 when in reality it was a new use, in low-risk patients, of a treatment that had been used in moderate- and high-risk patients for many years). Every paper reported the benefit as a relative risk reduction: ‘Heart attacks were cut by 54 per cent, strokes by 48 per cent and the need for angioplasty or bypass by 46 per cent among the group on Crestor compared to those taking a placebo or dummy pill,’ said the Daily Mail. In the Guardian: ‘Researchers found that in the group taking the drug, heart attack risk was down by 54 per cent and stroke by 48 per cent.’40

The numbers were entirely accurate, but as you now know, presenting them as relative risk reductions overstates the benefit. If you express the exact same results from the same trial as an absolute risk reduction, they look much less exciting. On placebo, your risk of a heart attack in the trial was 0.37 events per hundred person-years. If you were taking rosuvastatin, it fell to 0.17 events per hundred person-years. And you have to take a pill every day. And it might have side effects.

Many researchers think the best way to express a risk is the ‘number needed to treat’. This is a very concrete method, where you calculate how many people would need to take a treatment in order for one person to benefit from it. The results of the Jupiter trial were not presented, in the paper reporting the final findings, as a ‘number needed to treat’, but in that low-risk population, working it out on the back of an envelope, I calculate that a few hundred people would need to take the pill for a year to prevent one heart attack. If you want to take rosuvastatin every day, knowing that this is the likelihood of you receiving any benefit from the drug, then that’s entirely a matter for you. I don’t know what decision I would make, and everyone is different, as you can see from the fact that some people with low risk choose to take a statin, and some don’t. My concern is only whether those results are explained to them clearly, in the newspapers, in the press release, by their doctor, and in the original academic journal article.
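
For the curious, here is that back-of-the-envelope sum as a short Python sketch, using the event rates quoted above and treating them, crudely, as one-year risks (the function name is my own, for illustration):

    # Number needed to treat: the reciprocal of the
    # absolute risk reduction.

    def number_needed_to_treat(baseline_risk, treated_risk):
        return 1 / (baseline_risk - treated_risk)

    # Jupiter, roughly: 0.37 vs 0.17 heart attacks per hundred
    # person-years, read as yearly risks of 0.0037 and 0.0017
    print(number_needed_to_treat(0.0037, 0.0017))  # ~500

So roughly five hundred people would need to take the drug for a year to prevent a single heart attack, consistent with the ‘few hundred’ above.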

Let’s consider one final example. If your trial results really were a disaster, you have one more option. You can simply present them as if they were positive, regardless of what you actually found.

A group of researchers in Oxford and Paris set out to examine this problem systematically in 2009.41 They took every trial published over one month that had a negative result, in the correct sense of the word, meaning trials which had set out in their protocol to detect a benefit on a primary outcome, and then found no benefit. They then went through the academic journal reports of seventy-two of these trials, searching for evidence of ‘spin’: attempts to present the negative result in a positive light, or to distract the reader from the fact that the main result of the trial was negative.

First they looked in the abstracts. These are the brief summaries of an academic paper, on the first page, and they are widely read, either because people are too busy to read the whole paper, or because they cannot get access to it without a paid subscription (a scandal in itself). Normally, as you scan hurriedly through an abstract, you’d expect to be told the ‘effect size’ – ‘0.85 times as many heart attacks in patients on our new super-duper heart drug’ – along with an indication of the statistical significance of this result. But in this representative sample of seventy-two trials, all with unambiguously negative results for their main outcome, only nine gave these figures properly in the abstract, and twenty-eight gave no numerical results for the main outcome of the trial at all. The negative results were simply buried.

It gets worse: only sixteen of these negative trials reported the main negative outcome of the trial properly anywhere, even in the main body of the text.

So what was in these trial reports? Spin. Sometimes the researchers found some other positive result in the spreadsheets, and pretended that this was what they had intended to count as a positive result all along (a trick we have already seen: ‘switching the primary outcome’). Sometimes they reported a dodgy subgroup analysis – again, a trick we’ve already seen. Sometimes they claimed to have found that their treatment was ‘non-inferior’ to the comparison treatment (when in reality a ‘non-inferiority’ trial requires a bigger sample of people, because you might have missed a true difference simply by chance). Sometimes they just brazenly rambled on about how great the treatment was, despite the evidence.
