Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients

This paper is not a lone finding. In 2009 another group looked at papers reporting trials on prostaglandin eyedrops as a treatment for glaucoma42 (as always, the specific condition and treatment are irrelevant; it’s the principle that is important). They found thirty-nine trials in total, with the overwhelming majority, twenty-nine of them, funded by industry. The conclusions were chilling: eighteen of the twenty-nine industry-funded trials presented a conclusion in the abstract that misrepresented the main outcome measure. All of the non-industry-funded studies were fine.

All this is shameless, but it is possible because of structural flaws in the information architecture of academic medicine. If you don’t make people report the primary outcome in their paper, if you accept that they routinely switch outcomes, knowing full well that this distorts the statistics, you are permitting results to be spun. If you don’t link protocols clearly to papers, allowing people to check one against the other for ‘bait and switch’ with the outcomes, you permit results to be spun. If editors and peer reviewers don’t demand that pre-trial protocols are submitted alongside papers, and checked, they are permitting outcome switching. And if they don’t police the contents of abstracts, they are collaborators in a distortion of evidence that skews clinical practice and makes treatment decisions arbitrary rather than evidence-based; in this way, they play their part in harming patients.

Perhaps the greatest problem is that many of those who read the medical literature implicitly assume that such precautions are taken by all journal editors. But they are wrong to assume this. There is no enforcement for any of what we have covered: everyone is free to ignore it, and so, commonly – as with newspapers, politicians and quacks – uncomfortable facts are cheerfully spun away.

Finally, perhaps most worryingly of all, similar levels of spin have been reported in systematic reviews and meta-analyses, which are correctly regarded as the most reliable form of evidence. One study compared industry-funded reviews with independently-funded reviews from the Cochrane Collaboration.43 In their written conclusions, the industry-funded reviews all recommended the treatment without reservation, while none of the Cochrane meta-analyses did. This disparity is striking, because there was no difference in their numerical conclusions on the treatment effect, only in the narrative spin of the discussion in the conclusions section of the review paper.

The absence of scepticism in the industry-funded reviews was also borne out in the way they discussed methodological shortcomings of the studies they included: often, they simply didn’t. Cochrane reviews were much more likely to consider whether trials were at risk of bias; industry-funded studies brushed over these shortcomings. This is a striking reminder that the results of a scientific paper are much more important than the editorialising of the discussion section. It’s also a striking reminder that the biases associated with industry funding penetrate very deeply into the world of academia.

5

Bigger, Simpler Trials

So, we have established that there are some very serious problems in medicine. We have badly designed trials, which suffer from all kinds of fatal flaws: they’re conducted in unrepresentative patients, they’re too brief, they measure the wrong outcomes, they go missing if the results are unflattering, they get analysed stupidly, and often they’re simply not done at all, because of expense or a lack of incentives. These problems are frighteningly common, both for the trials that are used to get a drug on the market, and for the trials that are done later, all of which guide doctors’ and patients’ treatment decisions. It feels as if some people, perhaps, view research as a game, where the idea is to get away with as much as you can, rather than to conduct fair tests of the treatments we use.

However we view the motives, this unfortunate situation leaves us with a very real problem. For many of the most important diseases that patients present with, we have no idea which of the widely used treatments is best, and, as a consequence, people suffer and die unnecessarily. Patients, the public, and even many doctors live in blissful ignorance of this frightening reality, but in the medical literature, it has been pointed out again and again.

Over a decade ago, a BMJ paper on the future of medicine described the staggering scale of our ignorance. We still don’t know, it explained, which of the many current treatments is best, for something as simple as treating patients who’ve just had a stroke. But the paper also made a disarmingly simple observation: strokes are so common that if we took every patient in the world who had one, and entered them into a randomised trial comparing the best treatments, we would recruit enough patients in just twenty-four hours to answer this question. And it gets better: many outcomes from stroke – like death – become clear in a matter of months, sometimes weeks. If we started doing this trial today, and analysed the results as they came in, medical management of stroke could be transformed in less time than it takes to grow a sunflower.
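
To get a feel for the arithmetic behind that claim, here is a rough back-of-the-envelope sketch in Python. All the numbers are illustrative assumptions of mine – a commonly cited order-of-magnitude figure of around fifteen million strokes worldwide per year, a 20 per cent mortality rate on one treatment, and a two-percentage-point improvement to detect – not figures taken from the BMJ paper.

```python
from math import ceil

# Rough, illustrative numbers - not taken from the BMJ paper.
strokes_per_year_worldwide = 15_000_000   # order-of-magnitude estimate (assumption)
strokes_per_day = strokes_per_year_worldwide / 365

# Standard normal-approximation sample size for comparing two proportions
# (e.g. short-term mortality on treatment A versus treatment B).
def patients_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Patients needed per arm for 80% power at a two-sided 5% significance level."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((alpha_z + power_z) ** 2 * variance / (p1 - p2) ** 2)

# Suppose mortality is 20% on one treatment, and we want to detect
# an absolute improvement of two percentage points.
n_per_arm = patients_per_arm(0.20, 0.18)
total_needed = 2 * n_per_arm

print(f"Strokes per day worldwide (approx.): {strokes_per_day:,.0f}")
print(f"Patients needed for the trial:       {total_needed:,}")
print(f"Days of recruitment required:        {total_needed / strokes_per_day:.2f}")
```

Even on these cautious made-up numbers, the trial needs roughly twelve thousand patients, while the world produces over forty thousand new stroke patients every single day.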

The manifesto implicit in this paper was very straightforward: wherever there is genuine uncertainty about which treatment is best, we should conduct a randomised trial; medicine should be in a constant cycle of revision, gathering follow-up data and improving our interventions, not as an exception, but wherever that is possible.

There are technical and cultural barriers to doing this kind of thing, but they are surmountable, and we can walk through them by considering a project I’ve been involved in, setting up randomised trials embedded in routine practice, in everyday GP surgeries.1 These trials are designed to be so cheap and unobtrusive that they can be done whenever there is genuine uncertainty, and all the results are gathered automatically, at almost no cost, from patients’ computerised notes.

To make the design of these trials more concrete, let’s look at the pilot study, which compares two statins against each other, to see which is best at preventing heart attack and death. This is exactly the kind of trial you might naïvely think has already been done; but as we saw in the previous chapter, the evidence on statins has been left incomplete, even though they are some of the most widely prescribed drugs in the world (which is why, of course, we keep coming back to them in this book). People have done trials comparing each statin against a placebo, a rubbish comparison treatment, and found that statins save lives. People have also done trials comparing one statin with another, which is a sensible comparison treatment; but these trials all use cholesterol as a surrogate outcome, which is hopelessly uninformative. We saw in the ALLHAT trial, for example, that two drugs can be very similar in how well they treat blood pressure, but very different in how well they prevent heart attacks: so different, in fact, that large numbers of patients died unnecessarily over many years before the ALLHAT trial was done, simply because they were being prescribed the less effective drug (which was, coincidentally, the newer and more expensive one).

So we need to do real-world trials, to see which statin is best at saving lives; and I would also argue that we need to do these trials urgently. The most widely used statins in the UK are atorvastatin and simvastatin, because they are both off patent, and therefore cheap. If one of these turned out to be just 2 per cent better than the other at preventing heart attacks and death, this knowledge would save vast numbers of lives around the world, because heart attacks are so common, and because statins are so widely used. Failing to know the answer to this question could be costing us lives, every day that we continue to be ignorant. Tens of millions of people around the world are taking these drugs right now, today. They are all being exposed to unnecessary risk from drugs that haven’t been appropriately compared with each other, but they’re also all capable of producing data that could be used to gather new knowledge about which drug is best, if only they were systematically randomised, and their outcomes followed up.
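
As a rough illustration of the stakes, consider a hypothetical calculation. The user numbers, the event rate, and the reading of ‘2 per cent better’ as a relative improvement are all assumptions made purely for the sake of the example; none of them comes from a real head-to-head trial.

```python
# Illustrative only: every figure below is an assumption, not trial data.
people_on_statins = 30_000_000      # order of magnitude of worldwide users (assumption)
annual_event_rate = 0.02            # assumed yearly rate of heart attack or death on statin A
relative_improvement = 0.02         # statin B assumed to be 2% better in relative terms

events_on_a = people_on_statins * annual_event_rate
events_averted = events_on_a * relative_improvement

print(f"Events per year on statin A:        {events_on_a:,.0f}")
print(f"Events averted by switching to B:   {events_averted:,.0f} per year")
```

On these made-up numbers, knowing which drug is better – and acting on it – would prevent something of the order of ten thousand heart attacks or deaths every year.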

Our large, pragmatic trial is very simple. Everything in GPs’ offices today is already computerised, from the appointments to the notes to the prescriptions, as you will probably already know, from going to a doctor yourself. Whenever a GP sees a patient and decides to prescribe a statin, normally they click the ‘prescribe’ button, and are taken to a page where they choose a drug, and print out a prescription. For GPs in our trial, one extra page is added. ‘Wait,’ it says (I’m paraphrasing). ‘We don’t know which of these two statins is the best. Instead of choosing one, press this big red button to randomly assign your patient to one or the other, enter them into our trial, and you’ll never have to think about it ever again.’
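
A minimal sketch of what that ‘big red button’ might do behind the scenes could look like the following. This is purely illustrative, not the actual trial software: the function name, the record format and the use of simple equal-probability allocation are my own assumptions.

```python
import random

TRIAL_ARMS = ["atorvastatin", "simvastatin"]

def randomise_patient(patient_id: str, rng: random.Random) -> dict:
    """Allocate a consenting patient to one arm with equal probability,
    and return a record to be stored alongside their notes."""
    allocation = rng.choice(TRIAL_ARMS)
    return {"patient_id": patient_id, "arm": allocation}

# Example: what happens when the GP presses the button for one patient.
rng = random.Random()  # in a real trial, allocation would be generated and audited centrally
record = randomise_patient("pseudonymised-id-0001", rng)
print(f"Prescribe {record['arm']} and store the allocation: {record}")
```

In a real trial the allocation would be produced by a central, audited system rather than on the GP’s machine; the point here is only that randomisation itself can be a one-click operation.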

The last part of that last sentence is critical. At present, trials are a huge and expensive administrative performance. Many struggle to recruit enough patients, and many more struggle to recruit everyday doctors, as they don’t want to get involved in the mess of filling out patient report forms, calling patients back for extra appointments, doing extra measurements and so on. In our trial there is none of that. Patients are followed up – their cholesterol levels, their heart attacks, their weird idiosyncratic side effects, their strokes, their seizures, their deaths – and all of this data is taken from their computerised health records, automatically, without anybody having to lift a finger.
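
To make that concrete, here is a toy sketch of the kind of automatic follow-up involved. The record layout and event names are invented for illustration; they bear no relation to the coding schemes used in any real GP system.

```python
from collections import Counter

# Toy, invented record format: one coded event per row of a patient's electronic notes.
records = [
    {"patient_id": "0001", "arm": "atorvastatin", "event": "myocardial_infarction"},
    {"patient_id": "0002", "arm": "simvastatin",  "event": "cholesterol_test"},
    {"patient_id": "0003", "arm": "simvastatin",  "event": "myocardial_infarction"},
    {"patient_id": "0004", "arm": "atorvastatin", "event": "stroke"},
]

OUTCOMES_OF_INTEREST = {"myocardial_infarction", "stroke", "death"}

def count_outcomes(rows):
    """Tally outcome events by trial arm, straight from routinely collected notes."""
    counts = Counter()
    for row in rows:
        if row["event"] in OUTCOMES_OF_INTEREST:
            counts[row["arm"]] += 1
    return counts

print(count_outcomes(records))  # e.g. Counter({'atorvastatin': 2, 'simvastatin': 1})
```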

These simple trials have one disadvantage, which you may already have spotted, in that they aren’t ‘blinded’, so the patients know the name of the drug they’ve received. This is a problem in some studies: if you believe that you’ve been given a very effective medicine, or that you’ve been given a rubbish one, then the power of your beliefs and expectations can affect your health, through a phenomenon known as the placebo effect. If you’re comparing a painkiller against a dummy sugar pill, then a patient who knows they’ve been given a sugar pill for pain is likely to be annoyed and in more pain. But it’s harder to believe that patients have firm beliefs about the relative benefits of atorvastatin and simvastatin, and that these beliefs will then impact on cardiovascular mortality five years later. In all research, we make a trade-off between what is ideal and what is practical, giving careful consideration to the impact that any methodological shortcomings will have on a study’s results.

So, alongside this shortcoming, it’s worth taking a moment to notice how many of the serious problems with trials can be addressed by our study design of simple trials in electronic health records. Assuming, for the moment, that they will be analysed properly, without the dubious tricks mentioned in the previous chapter, there are other, more specific benefits. Firstly, as we know, trials are frequently conducted in unrepresentative ‘ideal patients’, and in odd settings. But the patients in our simple pragmatic trials are exactly like real-world patients, because they are real-world patients. They are all the people that GPs prescribe statins to. Secondly, because trials are expensive, stand-alone administrative entities, and because they struggle to recruit patients, they are often small. Our pragmatic trial, meanwhile, is vanishingly cheap to run, because almost all of the work is done using existing data – it cost £500,000 to set up this first trial, and that included building the platform that can be used to run any trial you like in the future. This is exceptionally cheap in the world of trials. Thirdly, trials are often brief, and fail to look at real-world outcomes: our simple trial runs forever, and we can collect follow-up data and monitor whether people have had a heart attack, or a stroke, or died, for decades to come, at almost no cost, by following their progress through the computerised health records that are being produced by their doctors anyway.

All this is made possible in Britain because of the General Practice Research Database, or GPRD, which has been running for many years. This contains anonymised medical records of several million patients from participating GPs’ surgeries, and is already widely used to do the kinds of side-effects monitoring studies I discussed earlier: in fact, this database is currently owned and run by the MHRA itself. So far, however, it has only been used for observational research, rather than randomised trials: people’s prescriptions and medical conditions are monitored, and analysed in bulk, in the hope that we can spot patterns. This can be helpful, and has been used to generate useful information about several medicines, but it can also be very misleading, especially when you try to compare the benefits of different treatment options.

This is because, often, the people given one treatment aren’t quite the same as the people given another, even though you think they are. There can be odd, unpredictable reasons why some patients are prescribed one drug, and some another, and it’s very hard to work out what these reasons are, or to account for them after the fact, when you’re analysing data you’ve collected from routine medical practice in the real world.

For example, maybe people in a posh area are more likely to be prescribed the more expensive of two similar drugs, because budgets in that clinic are less pressed, and the expensive one is more heavily marketed. If so, then even though the expensive drug is no better than a cheaper alternative, it would appear superior in the observational data, because wealthy people, overall, are healthier. This effect can also make drugs look worse than they really are. Many people have a mild kidney problem, for example, which grumbles along in the background alongside their other medical problems; it causes them no specific health issues, but their doctor is aware, from blood tests, that their kidneys are no longer clearing things from their bloodstream quite as efficiently as they do for the healthiest people in the population. When these patients are being treated for depression, say, or high blood pressure, maybe they will be put on a drug that is regarded as having a better safety profile, just to be on the safe side, on account of their mild kidney problems. In this case, that drug will look much less effective than it really is, when you follow up the patients’ outcomes, because many of the people receiving it were sicker to start with: the patients with minor things, like mild kidney problems, were actively channelled onto the drug believed to be safest.
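
A small simulation makes this ‘channelling’ mechanism easy to see. Every number in it – the share of frail patients, the channelling probabilities, the outcome risks, the drug names – is invented purely to illustrate the effect, and the two drugs are defined to be exactly equally effective.

```python
import random

rng = random.Random(42)

def simulate_naive_comparison(n_patients=100_000):
    """Two equally effective drugs; sicker patients are preferentially given drug_safe."""
    outcomes = {"drug_safe": [0, 0], "drug_other": [0, 0]}  # [patients, bad outcomes]
    for _ in range(n_patients):
        frail = rng.random() < 0.3                 # e.g. mild kidney problems
        # Channelling: frail patients usually get the drug believed to be safest.
        drug = "drug_safe" if rng.random() < (0.8 if frail else 0.3) else "drug_other"
        # Both drugs are identical here; only frailty changes the risk of a bad outcome.
        bad_outcome = rng.random() < (0.10 if frail else 0.02)
        outcomes[drug][0] += 1
        outcomes[drug][1] += int(bad_outcome)
    for drug, (n, events) in outcomes.items():
        print(f"{drug}: {events / n:.1%} bad outcomes in {n} patients")

simulate_naive_comparison()
# Despite identical true effectiveness, drug_safe shows a markedly worse event rate,
# simply because it was given to sicker patients.
```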
