I Think You'll Find It's a Bit More Complicated Than That

There isn’t space to debunk it in this column (and, bafflingly, most aspects of it are already debunked by the main report it accompanies). But if you want one good illustration of their approach, then you might want to read the bit where they talk about, well, me:

We were greatly concerned to read in the Guardian on 27 October an article clearly aimed at undermining the credibility of Professor John Wyatt which contained detailed information about Wyatt’s evidence … which could only have been passed on to the journalist concerned by a member of the select committee. There should be an inquiry about how this information got into the public domain and as to whether such a personal attack represents a serious breach of parliamentary procedure.

My article did contain detailed information about Prof Wyatt’s evidence. But I suspect that any inquiry set up to examine how I managed to obtain that information would finish its work well before the first set of tea and biscuits arrived, since all the facts came from the written and oral evidence, published openly and in full on the parliament.uk website during the select committee hearing. I downloaded the PDF, and then I read it. If there is a lesson for parliamentarians here, it is a simple one: we’re watching you, and we’re allowed to.

Building Evidence into Education

I think there is a huge prize waiting to be claimed by teachers. By collecting better evidence about what works best, and establishing a culture where this evidence is used as a matter of routine, we can improve outcomes for children, and increase professional independence.

This is not an unusual idea. Medicine has leapt forward with evidence-based practice, because it’s only by conducting ‘randomised trials’ – fair tests, comparing one treatment against another – that we’ve been able to find out what works best. Outcomes for patients have improved as a result, through thousands of tiny steps forward. But these gains haven’t been won simply by doing a few individual trials, on a few single topics, in a few hospitals here and there. A change of culture was also required, with more education about evidence for medics, and whole new systems to run trials as a matter of routine, to identify questions that matter to practitioners, to gather evidence on what works best, and then, crucially, to get it read, understood, and put into practice.

I want to persuade you that this revolution could – and should – happen in education. There are many differences between medicine and teaching, but they also have a lot in common. Both involve craft and personal expertise, learnt over years of experience. Both work best when we learn from the experiences of others, and what worked best for them. Every child is different, of course, and every patient is different too; but we are all similar enough that research can help find out which interventions will work best overall, and which strategies should be tried first, second or third, to help everyone achieve the best outcome.

Before we get that far, though, there is a caveat: I’m a doctor. I know that outsiders often try to tell teachers what they should do, and I’m aware this often ends badly. Because of that, there are two things we should be clear on.

Firstly, evidence-based practice isn’t about telling teachers what to do – in fact, quite the opposite. This is about empowering teachers, and setting a profession free from governments, ministers and civil servants who are often overly keen on sending out edicts, insisting that their new idea is the best in town. Nobody in government would tell a doctor what to prescribe, but we all expect doctors to be able to make informed decisions about which treatment is best, using the best currently available evidence. I think teachers could one day be in the same position.

Secondly, doctors didn’t invent evidence-based medicine. In fact, quite the opposite is true: just a few decades ago, best medical practice was driven by things like eminence, charisma and personal experience. We needed the help of statisticians, epidemiologists, information librarians and experts in trial design to move forwards. Many doctors – especially the most senior ones – fought hard against this, regarding ‘evidence-based medicine’ as a challenge to their authority.

In retrospect, we’ve seen that these doctors were wrong. The opportunity to make informed decisions about what works best, using good-quality evidence, represents a truer form of professional independence than any senior figure barking out his opinion. A coherent set of systems for evidence-based practice listens to people on the front line, to find out where the uncertainties are, and decide which ideas are worth testing. Lastly, crucially, individual judgement isn’t undermined by evidence: if anything, informed judgement is back in the foreground, and hugely improved.

This is the opportunity that I think teachers might want to take up. Because some of these ideas might be new to some readers, I’ll describe the basics of a randomised trial, but after that, I’ll describe the systems and structures that exist to support evidence-based practice, which are in many ways more important. There is no need for a world where everyone is suddenly an expert on research, running trials in their classroom tomorrow: what matters is that most people understand the ideas, that we remove the barriers to ‘fair tests’ of what works, and that evidence can be used to improve outcomes.

How randomised trials work

Where they are feasible, randomised trials are generally the most reliable tool we have for finding out which of two interventions works best. We simply take a group of children, or schools (or patients, or people); we split them into two groups at random; we give one intervention to one group, and the other intervention to the other group; then we measure how each group is doing, to see if one intervention achieved its supposed outcome any better.
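That procedure — pool, random split, intervene, compare — is simple enough to sketch in a few lines of code. This is purely an illustration: the pupil scores, the two teaching methods and their effect sizes are all invented for the example.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def run_trial(pupils, intervention_a, intervention_b):
    """Randomly split pupils into two groups, apply one intervention
    to each group, and return the mean outcome for each group."""
    pupils = list(pupils)
    random.shuffle(pupils)  # random assignment is the crucial step
    half = len(pupils) // 2
    group_a, group_b = pupils[:half], pupils[half:]
    return (mean([intervention_a(p) for p in group_a]),
            mean([intervention_b(p) for p in group_b]))

random.seed(0)
# Invented baseline reading scores for 1,000 pupils.
pupils = [random.gauss(100, 15) for _ in range(1000)]

# Two imaginary teaching methods, assumed here to add 5 and 2 points.
score_a, score_b = run_trial(pupils,
                             lambda p: p + 5,
                             lambda p: p + 2)
print(score_a > score_b)
```

Because both groups are drawn at random from the same pool, any systematic difference between their average outcomes can be attributed to the interventions rather than to pre-existing differences between the groups.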

This is how medicines are tested, and in most circumstances it would be regarded as dangerous for anyone to use a treatment today, without ensuring that it had been shown to work well in a randomised trial. Trials are not only used in medicine, however, and it is common to find them being used in fields as diverse as web design, retail, government, and development work around the world.

For example, there was a long-standing debate about which of two competing models of ‘microfinance’ schemes was best at getting people out of poverty in India, whilst ensuring that the money was paid back, so it could be re-used in other villages: a randomised trial compared the two models, and established which was best.

At the top of Wikipedia pages during a funding drive, you can see the smiling face of Jimmy Wales, the founder, on a fundraising advert. He’s a fairly shy person, and didn’t want his face to be on these banners. But Wikipedia ran a randomised trial, assigning visitors to different adverts: some saw an advert with a child from the developing world (‘She could have access to all of human knowledge if you donate …’); some saw an attractive young intern; some saw Jimmy Wales. The adverts with Wales got more clicks and more donations than the rest, so they were used universally.

It’s easy to imagine that there are ways around the inconvenience of randomly assigning people, or schools, to one intervention or another: surely, you might think, we could just look at the people who are already getting one intervention, or another, and simply monitor their outcomes to find out which is the best. But this approach suffers from a serious problem. If you don’t randomise, and just observe what’s happening in classrooms already, then the people getting different interventions might be very different from each other, in ways that are hard to measure.

For example, when you look across the country, children who are taught to read in one particularly strict and specific way at school may perform better on a reading test at age seven, but that doesn’t necessarily mean that the strict, specific reading method was responsible for their better performance. It may just be that schools with more affluent children, or fewer social problems, are more able to get away with using this (imaginary) strict reading method, and their pupils were always going to perform better on reading tests at age seven.
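You can watch this confounding happen in a toy simulation. In the sketch below — all numbers invented — the ‘strict’ reading method has no real effect at all, but more affluent schools are more likely to adopt it, and affluence itself raises scores; a naive observational comparison still shows the method with a solid ‘advantage’.

```python
import random

random.seed(1)

def simulate_school():
    """One imaginary school: affluence drives both the choice of
    reading method and the reading scores; the method adds nothing."""
    affluence = random.random()                       # 0 = deprived, 1 = affluent
    uses_strict_method = random.random() < affluence  # richer schools adopt it more
    score = 60 + 30 * affluence + random.gauss(0, 5)  # method itself adds 0 points
    return uses_strict_method, score

schools = [simulate_school() for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

strict = mean([s for used, s in schools if used])
other = mean([s for used, s in schools if not used])
print(round(strict - other, 1))  # a clear gap, with no causal effect at all
```

Randomising which schools use the method would break the link between affluence and assignment, and the spurious gap would disappear.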

This is also a problem when you are rolling out a new policy, and hoping to find out whether it works better than what’s already in place. It is tempting to look at results before and after a new intervention is rolled out, but this can be very misleading, as other factors may have changed at the same time. For example, if you have a ‘back to work’ scheme that is supposed to get people on benefits back into employment, it might get implemented across the country at a time when the economy is picking up anyway, so more people will be finding jobs, and you might be misled into believing that it was your ‘back to work’ scheme that did the job (at best, you’ll be tangled up in some very complex and arbitrary mathematical modelling, trying to discount for the effects of the economy picking up).

Sometimes people hope that running a pilot is a way around this, but this is also a mistake. Pilots are very informative about the practicalities of whether your new intervention can be implemented, but they can be very misleading on the benefits or harms, because the centres that participate in pilots are often different from the centres that don’t. For example, job centres participating in a ‘back to work’ pilot might be less busy, or have more highly motivated staff: their clients were always going to do better, so a pilot in those centres will make the new jobs scheme look better than it really is. Similarly, running a pilot of a fashionable new educational intervention in schools that are already performing well might make the new idea look fantastic, when in reality the good results have nothing to do with the new intervention.

This is why randomised trials are the best way to find out how well a new intervention works: they ensure that the pupils or schools getting a new intervention are the same as the pupils and schools still getting the old one, because they are all randomly selected from the same pool.

At around this point, most people start to become nervous: surely it’s wrong, for example, to decide what kind of education a child gets, simply at random? This cuts to the core of why we do trials, and why we gather evidence on what works best.

Myths about randomised trials

While there are some situations where trials aren’t appropriate – and where we need to be cautious in interpreting the results – there are also several myths about trials. These myths are sometimes used to prevent trials being done, which slows down progress, and creates harm, by preventing us from finding out what works best. Some people even claim that trials are undesirable, and even completely impossible, in schools: this is a peculiarly local idea, and there have been huge numbers of trials in education in other countries, such as the US. However, the specific myths are worth discussing.

Firstly, people sometimes worry that it is unethical to randomly assign children to one educational intervention or another. Often this is driven by an implicit belief that a new or expensive intervention is always necessarily better. When people believe this, they also worry that it’s wrong to deprive people of the new intervention. It’s important to be clear, before we get to the detail, that a trial doesn’t necessarily involve depriving people of anything, since we can often run a trial where people are randomly assigned to receive the new intervention now, or after a six-month wait. But there is a more important reason why trials are ethically acceptable: in reality, before we do a trial, we generally have no idea which of two interventions is best. Furthermore, new things that many people believe in can sometimes turn out, in reality, to be very harmful.

Medicine is littered with examples of this, and it is a frightening reality. For many years, it was common to treat everyone who had a serious head injury with steroids. This made perfect sense on paper: head injuries cause the brain to swell up, which can cause important structures to be crushed inside our rigid skulls; but steroids reduce swelling (this is why you have steroid injections for a swollen knee), so they should improve survival. Nobody ran a trial on this for many years. In fact, it was widely argued that randomising unconscious patients in A&E to have steroids or not would be unethical and unfair, so trials were actively blocked. When a trial was finally conducted, it turned out that steroids actually increased the chances of dying after a head injury. The new intervention, that made perfect sense on paper, that everyone believed in, was killing people: not in large enough numbers to be immediately obvious, but when the trial was finally done, an extra two people died out of every hundred given steroids.

There are similar cases from the world of education. The ‘Scared Straight’ programme also made sense on paper: young children were taken into prisons and shown the consequences of a life of crime, in the hope that they would be more law-abiding in their own lives. Following the children who participated in this programme into adult life, it seemed they were less likely to commit crimes, when compared with other children. But here, researchers were caught out by the same problem discussed above: the schools – and so the children – who went on the Scared Straight course were different from the children who didn’t. When a randomised trial was finally done, where this error could be accounted for, we found out that the Scared Straight programme – rolled out at great expense, with great enthusiasm, good intentions and huge optimism – was actively harmful, making children more likely to go to prison in later life.

So we must always be cautious about assuming that things which are new, or expensive, are necessarily always better. But this is just one special case of a broader issue: we should always be clear when we are uncertain about which intervention is best. Right now, there are huge numbers of different interventions used throughout the country – different strategies to reduce absenteeism, or teach arithmetic, or reduce teenage pregnancies, or any number of other things – where there is no evidence to say which of the currently used methods is best. There is arbitrary variation, across the country, across a town, in what strategies and methods are used, and nobody worries that there is an ethical problem with this.
