
I was taught at medical school that in this situation, a doctor should regard the rest of the medical profession as unpaid stunt doubles: let them make the risky mistakes on your behalf, sit back, watch, learn, and then come back out when it’s safe. In some respects you could argue that this is useful advice for life more generally. But how are side effects monitored?

Once a drug is approved, its safety needs to be assessed. This is a complex business, with – to be fair – genuine methodological challenges, but also glaring, unnecessary holes. The flaws are driven by unnecessary secrecy, poor communication, and an institutional reluctance to take drugs off the market. To understand these, we need to understand the basics of the field known as ‘pharmacovigilance’.

It is important to recognise, before we even begin, that drugs will always come onto the market with unforeseen side effects. This is because you need data on lots of patients to spot rare side effects, but the trials that are used to get a drug approved are usually small, totalling between five hundred and 3,000 people. In fact, we can quantify how common a side effect must be in order to be detected in such a small number of people, by using a simple piece of maths called ‘the rule of three’. If five hundred patients are studied in pre-approval trials, that is only enough patients to spot the side effects which occur more frequently than one in every 166 people; if 3,000 patients are studied, that is still only enough to spot side effects which affect more than one in every 1,000 people. The overall rule here is easy to apply: if a side effect hasn’t yet occurred in n patients, then you can be 95 per cent confident that it will happen in fewer than one in every n/3 patients (there’s a mathematical explanation of why this is true in the footnote below, if you want one, but it makes my head hurt*). You can also use the rule of three in real life: if three hundred of your parachutes have opened just fine, for example, then assuming no other knowledge, the chance of one not opening, and sending you to certain death, is at most about one in a hundred. This may or may not be reassuring for you.
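
For anyone who wants to see the arithmetic work, here is a minimal sketch in Python (the function names are just for illustration): a true event rate of p would produce no events in n patients with probability (1 - p) to the power n, and setting that probability to 0.05 gives p of roughly 3/n, which is the rule of three. The check below uses the two trial sizes mentioned above.

```python
# A minimal check of the 'rule of three', assuming a side effect that
# strikes patients independently at some fixed rate p. If a trial saw
# zero cases, the largest rate still compatible with that (at 95 per cent
# confidence) satisfies (1 - p)**n = 0.05, which works out at roughly 3/n.

def rule_of_three_bound(n: int) -> float:
    """Approximate 95% upper bound on the event rate after n event-free patients."""
    return 3.0 / n

def prob_of_zero_events(n: int, p: float) -> float:
    """Probability of seeing no events at all in n patients if the true rate is p."""
    return (1.0 - p) ** n

for n in (500, 3000):
    p = rule_of_three_bound(n)
    print(f"{n} patients: bound is about 1 in {n // 3}, "
          f"chance of seeing nothing at that rate = {prob_of_zero_events(n, p):.3f}")
```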

Putting this in context: your drug might make one in every 5,000 people literally explode – their head blows off, their intestines fly out – through some idiosyncratic mechanism that nobody could have foreseen. But at the point when the drug is approved, after only 1,000 people have taken it, it’s very likely that you’ll never have witnessed one of these spectacular and unfortunate deaths. After 50,000 people have taken your drug, though, out there in the real world, you’d expect to have seen about ten people explode overall (since, on average, it makes one in every 5,000 people explode).
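
The same arithmetic covers the exploding-patient example. A minimal sketch, using the made-up rate of one in 5,000 from above:

```python
# The arithmetic behind the example above: a made-up side effect that
# strikes one person in every 5,000 who takes the drug.

rate = 1 / 5000

# Chance of having seen no cases at all among the first 1,000 patients:
p_none = (1 - rate) ** 1000
print(f"Chance of no cases in 1,000 patients: {p_none:.2f}")      # about 0.82

# Expected number of cases once 50,000 people have taken the drug:
print(f"Expected cases in 50,000 patients: {50000 * rate:.0f}")   # 10
```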

Now, if your drug is causing a very rare adverse event, like exploding, you’re actually quite lucky, because weird adverse events really stand out, as there’s nothing like them happening already. People will talk about patients who explode, they’ll write them up in short reports for academic journals, probably notify various authorities, coroners might be involved, alarm bells will generally ring, and people will look around for what is suddenly causing patients to explode very early on, probably quite soon after the first one goes off.

But many of the adverse events caused by drugs are things that happen a lot anyway. If your drug increases the chances of someone getting heart failure, well, there are a lot of people around with heart failure already, so if doctors see one more case of heart failure in their clinic, then they’re probably not going to notice, especially if this drug is given to older people, who already experience a lot of heart failure anyway. Even detecting any signal of increased heart failure in a large group of patients might be tricky.

This helps us to understand the various different mechanisms that are used to monitor side effects by drug companies, regulators and academics. They fall into roughly three groups:

 
  1. Spontaneous reports of side effects, from patients and doctors, to the regulator
  2. ‘Epidemiology’ studies looking at the health records of large groups of patients
  3. Reports of data from drug companies

Spontaneous reports are the simplest system. In most territories around the world, when a doctor suspects that a patient has developed some kind of adverse reaction to a drug, they can notify the relevant local authority. In the UK this is via something called the ‘Yellow Card System’: these freepost cards are given out to all doctors, making the system easy to use, and patients can also report suspected adverse events themselves, online at yellowcard.mhra.gov.uk (please do).

These spontaneous reports are then categorised by hand, and collated into what is effectively a giant spreadsheet, with one row for every drug on the market, and one column for every imaginable type of side effect. Then you look at how often each type of side effect is reported for each drug, and try to decide whether the figure is higher than you’d expect to see simply from chance. (If you’re statistically minded, the names of the tools used, such as ‘proportional reporting ratios’ and ‘Bayesian confidence propagation neural networks’, will give you a clue as to how this is done. If you’re not statistically minded, then you’re not missing out; at least, no more here than elsewhere in your life.)
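
To give a flavour of how one of those tools works, here is a minimal sketch of a proportional reporting ratio, with counts invented purely for illustration: it compares the share of one drug’s reports that mention a given side effect with the share across every other drug in the database.

```python
# A minimal sketch of a proportional reporting ratio (PRR). The counts
# below are invented purely for illustration.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports of the suspect reaction for the drug of interest
    b: reports of every other reaction for the drug of interest
    c: reports of the suspect reaction for all other drugs
    d: reports of every other reaction for all other drugs
    """
    share_this_drug = a / (a + b)     # share of this drug's reports mentioning the reaction
    share_all_others = c / (c + d)    # the same share across the rest of the database
    return share_this_drug / share_all_others

# Invented counts: 30 of 400 reports for the drug mention heart failure,
# against 500 of 90,000 reports for every other drug combined.
prr = proportional_reporting_ratio(30, 370, 500, 89_500)
print(f"PRR = {prr:.1f}")   # values well above 1 are flagged for further investigation
```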

This system is good for detecting unusual side effects: a drug that made your head and abdomen literally explode, for example, would be spotted fairly easily, as discussed. Similar systems are in place internationally, most of the results from around the world are pooled together by WHO in Uppsala, and academics or companies can apply for access, with varying success (as discussed in this long endnote37).

But this approach suffers from an important problem: not all adverse events are reported. The usual estimate is that in Britain, only around one in twenty gets fed back to the MHRA.38
This is not because all doctors are slack. It would actually be perfect if that was the cause, because then at least we would know that all side effects on all drugs had an equal chance of not being reported, and we could still usefully compare the proportions of side-effect reports between each other, and between different drugs.

Unfortunately, different side effects from different drugs are reported at very different rates. A doctor might be more likely to be suspicious of a symptom being a side effect if the patient is on a drug that is new on the market, for example, so those cases may be reported more than side effects for older drugs. Similarly, if a patient develops a side effect that is already well known to be associated with a drug, a doctor will be much less likely to bother reporting it, because it’s not an interesting new safety signal, it’s just a boring instance of a well-known phenomenon. And if there are rumours or news stories about problems with a drug, doctors may be more inclined to spontaneously report adverse events, not out of mischief, but simply because they’re more likely to remember prescribing the controversial drug when a patient comes back with an odd medical problem.

Also, a doctor’s suspicions that something is a side effect at all will be much lower if it is a medical problem that happens a lot anyway, as we’ve already seen: people often get headaches, for example, or aching joints, or cancer, in the everyday run of life, so it may not even occur to a doctor that these problems are anything to do with a prescription they’ve given. In any case, these adverse events will be hard to notice against the high background rate of people who suffer from them, and this will all be especially true if they occur a long time after the patient starts on a new drug.

Accounting for these problems is extremely difficult. So spontaneous reporting can be useful if the adverse events are extremely rare without the drug, or are brought on rapidly, or are the kind of thing that is typically found as an adverse drug reaction (a rash, say, or an unusual drop in the number of white blood cells). But overall, although these systems are important, and contribute to a lot of alarms being usefully raised, generally they’re only used to identify suspicions.39 These are then tested in more robust forms of data.

Better data can come from looking at the medical records of very large numbers of people, in what are known as ‘epidemiological’ studies. In the US this is tough, and the closest you can really get are the administrative databases used to process payments for medical services, which miss most of the detail. In the UK, however, we’re currently in a very lucky and unusual position. This is because our health care is provided by the state, not just free at the point of access, but also through one single administrative entity, the NHS. As a result of this happy accident, we have large numbers of health records that can be used to monitor the benefits and risks of treatments. Although we have failed to realise this potential across the board, there is one corner called the General Practice Research Database, where several million people’s GP records are available. These records are closely guarded, to protect anonymity, but researchers in pharmaceutical companies, regulators and universities have been able to apply for access to specific parts of anonymised records for many years now, to see whether specific medicines are associated with unexpected harms. (Here I should declare an interest, because like many other academics I am doing some work on analysing this GPRD data myself, though not to look at side effects.)

Studying drug safety in the full medical record of patients who receive a prescription in normal clinical practice has huge advantages over spontaneous report data, for a number of reasons. Firstly, you have all of a patient’s medical notes, in coded form, as they appear on the clinic’s computer, without any doctor having to make a decision about whether to bother flagging up a particular outcome.

You also have an advantage over those small approval trials, because you have a lot of data, allowing you to look at rare outcomes. And more than that, these are real patients. The people who participate in trials are generally unusual ‘ideal patients’: they’re healthier than real patients, with fewer other medical problems, they’re on fewer other medications, they’re less likely to be elderly, very unlikely to be pregnant, and so on. Drug companies like to trial their drugs in these ideal patients, as healthier patients are more likely to get better and to make the drug look good. They’re also more likely to give that positive result in a briefer, cheaper trial. In fact, this is another way in which database studies can have an advantage: approval trials are generally brief, so they expose patients to drugs for a shorter period of time than the normal duration of a prescription. But database studies give us information on what drugs do in real-world patients, under real-world conditions (and as we shall see, this isn’t just restricted to the issue of side effects).

With this data, you can look for an association between a particular drug and an increased risk of an outcome that is already common, like heart attacks, for example. So you might compare heart-attack risk between patients who have received three different types of foot-fungus medication, for example, if you were worried that one of them might damage the heart. This is not an entirely straightforward business, of course, partly because you have to make important decisions about what you compare with what, and this can affect your outcomes. For example, should you compare people getting your worrying drug against other people getting a similar drug, or against people matched for age but not getting any drug? If you do the latter, are foot-fungus patients definitely comparable with age-matched healthy patients on your database? Or are patients with foot fungus, perhaps, more likely to be diabetic?
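
As a rough sketch of the simplest version of this comparison, here is a crude risk ratio between two hypothetical foot-fungus drugs, with entirely invented numbers, and before any of the adjustments just discussed:

```python
# A crude cohort comparison, with invented numbers: heart attacks among
# patients prescribed two different (hypothetical) foot-fungus drugs.
# A real analysis would go on to adjust for age, diabetes and so on.

def risk_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Risk of the outcome on drug A relative to drug B, unadjusted."""
    return (events_a / n_a) / (events_b / n_b)

rr = risk_ratio(events_a=120, n_a=20_000,   # drug A: 120 heart attacks in 20,000 patients
                events_b=60, n_b=20_000)    # drug B: 60 heart attacks in 20,000 patients
print(f"Crude risk ratio = {rr:.1f}")       # 2.0: twice the risk, before any adjustment
```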

You can also get caught out by a phenomenon called ‘channelling’: this is where patients who have reported problems on previous drugs are preferentially given a drug with a solid reputation for being safe. As a result, the patients on the safe drug include many of the patients who are sicker to start with, and so are more likely to report adverse events, for reasons that have nothing to do with the drug. That can end up making the safe drug look worse than it really is; and by extension, it can make a riskier drug look better in comparison.

But in any case, short of conducting massive drug trials in routine care – not an insane idea, as we will see later – these kinds of studies are the best shot we have for making sure that drugs aren’t associated with terrible harms. So they are conducted by regulators, by academics, and often by the manufacturer at the request of the regulator.

In fact, drug companies are under a number of obligations to monitor side effects, both general and specific, and report them to the relevant authority, but in reality these systems often don’t work very well. In 2010, for example, the FDA wrote a twelve-page letter to Pfizer complaining that it had failed to properly report adverse events arising after its drugs came to market.40 The FDA had conducted a six-week investigation, and found evidence of several serious and unexpected adverse events that had not been reported: Viagra causes serious visual problems, for example, and even blindness. The FDA said Pfizer failed to report these events in a timely fashion, by ‘misclassifying and/or downgrading reports to non-serious, without reasonable justification’. You will remember the paroxetine story from earlier, where GSK failed to report important data on suicide. These are not isolated incidents.

Lastly, you can also get some data on side effects from trials, even though the adverse events we’re trying to spot are rare, and therefore much less likely to appear in small studies. Here again, though, there have been problems. For example, sometimes companies can round up all kinds of different problems into one group, with a label that doesn’t really capture the reality of what was happening to the patients. In antidepressant trials, adverse events like suicidal thoughts, suicidal behaviours and suicide attempts have been coded as ‘emotional lability’, ‘admissions to hospital’, ‘treatment failures’ or ‘drop-outs’.41 None of these really captures the reality of what was going on for the patient.
