I Think You'll Find It's a Bit More Complicated Than That


Datamining for Terrorists Would Be Lovely If It Worked

Guardian, 28 February 2009

This week Sir David Omand, the former Whitehall Security and Intelligence Co-ordinator, described how the state should analyse data about individuals in order to find terrorist suspects: travel information, tax and phone records, emails, and so on. ‘Finding out other people’s secrets is going to involve breaking everyday moral rules,’ he said, because we’ll need to screen everyone to find the small number of suspects.

There is one very significant issue that will always make datamining unworkable when used to search for terrorist suspects in a general population, and that is what we might call the ‘baseline problem’: even with the most brilliantly accurate test imaginable, your risk of false positives increases to unworkably high levels as the outcome you are trying to predict becomes rarer in the population you are examining. This stuff is tricky but important. If you pay attention, you will understand it.

Let’s imagine you have an amazingly accurate test, and each time you use it on a true suspect, it will correctly identify them as such eight times out of ten (but miss them two times out of ten); and each time you use it on an innocent person, it will correctly identify them as innocent nine times out of ten, but incorrectly identify them as a suspect one time out of ten.

These numbers tell you about the chances of a test result being accurate, given the status of the individual, which you already know (and the numbers are a stable property of the test). But you stand at the other end of the telescope: you have the result of a test, and you want to use that to work out the status of the individual. That depends entirely on how many suspects there are in the population being tested.

If you have ten people, and you know that one of them is a suspect, and you assess them all with this test, then you will correctly get your one true positive and – on average – one false positive. If you have a hundred people, and you know that one is a suspect, you will get your one true positive and, on average, ten false positives. If you’re looking for one suspect among a thousand people, you will get your suspect, and a hundred false positives. Once your false positives begin to dwarf your true positives, a positive result from the test becomes pretty unhelpful.

Remember this is a screening tool, for assessing dodgy behaviour, spotting dodgy patterns, in a general population. We are invited to accept that everybody’s data will be surveyed and processed, because MI5 has clever algorithms to identify people who were never previously suspected. There are 60 million people in the UK, with, let’s say, 10,000 true suspects. Using your unrealistically accurate imaginary screening test, you get 6 million false positives. At the same time, of your 10,000 true suspects, you miss 2,000.
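If you want to check that arithmetic for yourself, here is a quick sketch in Python (the function name and layout are mine; the figures are the article’s imaginary ones):

```python
def screening_outcomes(population, suspects, sensitivity, false_positive_rate):
    """Expected results of screening an entire population with an imperfect test."""
    innocents = population - suspects
    true_positives = suspects * sensitivity            # suspects correctly flagged
    missed_suspects = suspects - true_positives        # suspects who slip through
    false_positives = innocents * false_positive_rate  # innocents wrongly flagged
    return true_positives, missed_suspects, false_positives

# One suspect in a thousand: roughly 100 innocents flagged alongside them.
print(screening_outcomes(1_000, 1, 0.8, 0.1))

# The UK-scale example: 60 million people, 10,000 true suspects.
tp, missed, fp = screening_outcomes(60_000_000, 10_000, 0.8, 0.1)
print(tp, missed, fp)  # roughly 8,000 caught, 2,000 missed, 5,999,000 false positives
```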

If you raise the bar on any test, to increase what statisticians call the ‘specificity’, and thus make it less prone to false positives, then you also make it much less sensitive, so you start missing even more of your true suspects (remember, you’re already missing two in ten of them).

Or do you just want an even more stupidly accurate imaginary test, without sacrificing true positives? It won’t get you far. Let’s say you incorrectly identify an innocent person as a suspect one time in a hundred: you still get 600,000 false positives, out of the UK population. One time in a thousand? Come on. Even with these unfeasibly accurate imaginary tests, when you screen a general population as proposed, it is hard to imagine a point where the false positives are usefully low, and the true positives are not missed. And our imaginary test really was ridiculously good: it’s a very difficult job to identify suspects just from slightly abnormal patterns in the normal things that everybody does.
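One way to see why better specificity doesn’t rescue the scheme: work out what fraction of the people the test flags are actually suspects (what statisticians call the ‘positive predictive value’) as the false positive rate falls. This is my own illustrative sketch, using the article’s figures:

```python
# Positive predictive value: of everyone the test flags, what fraction are
# genuine suspects? The article's figures: 60m people, 10,000 true suspects,
# a test that catches 8 in 10 of them.
population, suspects, sensitivity = 60_000_000, 10_000, 0.8
innocents = population - suspects
true_positives = suspects * sensitivity

for fp_rate in (0.1, 0.01, 0.001):
    false_positives = innocents * fp_rate
    ppv = true_positives / (true_positives + false_positives)
    print(f"1 in {round(1 / fp_rate):,} innocents wrongly flagged -> "
          f"{false_positives:,.0f} false positives, PPV {ppv:.1%}")
# Even at 1 in 1,000, around 60,000 innocents are flagged against 8,000
# true suspects: a positive result is still wrong about seven times in eight.
```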

Then it gets worse. These suspects are undercover operatives: they’re trying to hide from you, they know you’re datamining, so they will go out of their way to produce trails which can confuse you.

And lastly, there’s the problem of validating your algorithms, and calibrating your detection systems. To do that, you need training data: 10,000 people where you know for definite if they are suspects or not, to compare your test results against. It’s hard to picture how that can be done.

I’m not saying you shouldn’t spy on everyday people; obviously I have a view, but I’m happy to leave the morality and the politics to those less nerdy than me. I’m just giving you the maths on specificity, sensitivity and false positives.

Benford’s Law: Using Stats to Bust an Entire Nation for Naughtiness

Guardian, 17 September 2011

This week we might bust an entire nation for handing over dodgy economic statistics. But first: why would they bother? Well, it turns out that whole countries have an interest in distorting their accounts, just like companies and individuals. If you’re a Euro member like Greece, for example, you have to comply with various economic criteria, and there’s the risk of sanctions if you miss them.

Government figures are subjected to various forms of audit already, of course, but alongside checking that things marry up with each other, forensic statisticians also have a few interesting tricks to try to spot suspicious patterns in the raw numbers, and so estimate the chances that figures from a set of accounts have been tampered with. One of the cleverest tools is something called Benford’s law.

Imagine you have the data on, say, the population of every country in the world. Now, take only the ‘leading digit’ from each number: the first number in the number, if you like. For the UK population, which was 61,838,154 in 2009, that leading digit would be six. Andorra’s was 85,168, so that’s eight. And so on.

If you take all those leading digits, from all the countries, then overall you might naïvely expect to see the same number of ones, fours, nines and so on. But in fact, for naturally occurring data, you get more ones than twos, more twos than threes, and so on, all the way down to nine. This is Benford’s law: the distribution of leading digits follows a logarithmic distribution, so you get a ‘one’ most commonly, appearing as first digit around 30 per cent of the time, and a nine as first digit only 5 per cent of the time.
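The predicted frequencies are easy to compute: the probability of leading digit d is log10(1 + 1/d). A quick sketch (the helper function is my own):

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(n):
    """First digit of a positive integer, e.g. 61,838,154 -> 6."""
    return int(str(abs(int(n)))[0])

print(f"P(1) = {benford[1]:.3f}")  # 0.301 -- about 30 per cent
print(f"P(9) = {benford[9]:.3f}")  # 0.046 -- about 5 per cent
print(leading_digit(61_838_154))   # 6 (the UK population figure)
print(leading_digit(85_168))       # 8 (Andorra)
```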

The next time you’re waiting for a bus, you can think about why this happens (bear in mind what leading digits do when quantities repeatedly double, perhaps). Reality agrees with this theory pretty neatly, and if you go to the website testingbenfordslaw.com you’ll see the proportions of each leading digit from lots of real-world datasets, graphed alongside what Benford’s law predicts they should be, with data ranging from Twitter users’ follower counts to the number of books in different libraries across the US.

Benford’s law doesn’t work perfectly: it only works when you’re examining groups of numbers that span several orders of magnitude. So, for example, for the age in years of the graduate working population, which goes from around twenty to seventy, it wouldn’t be much good; but for personal savings, from nothing to millions, it should work fine. And of course Benford’s law works in other counting systems, so if three-fingered sloths ever developed numeracy, and counted in base 6, or maybe base 12, the law would still hold.
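The other-bases claim is the same formula with a different logarithm: in base b, the probability of leading digit d is log_b(1 + 1/d), for d from 1 to b−1. A sketch of the sloths’ base-6 table (function name mine):

```python
import math

def benford_probs(base):
    """Benford leading-digit probabilities in an arbitrary number base."""
    return {d: math.log(1 + 1 / d, base) for d in range(1, base)}

# The sloths' base-6 table: digits 1 to 5, with '1' still the most common.
base6 = benford_probs(6)
print(base6)
# The product (2/1)(3/2)...(6/5) telescopes to 6, so the probabilities
# sum to 1 (up to floating point) -- the law holds in any base.
print(sum(base6.values()))
```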

This property of naturally occurring data has been used to check for dubious behaviour in figures for four decades now: it was first used on socioeconomic data submitted to support planning applications, and then on company accounts; it’s even admissible in US courts. In 2009 an economist from the Bundesbank suggested using Benford’s law on countries’ economic data, and last month the results were published (hat-tip to Tim Harford for the paper).

Researchers took macroeconomic data on all twenty-seven EU nations, looking specifically at the accounting data that countries have to hand over for monitoring, which is all posted for free at the online repository Eurostat: things like government deficit, debt, revenue, expenditure, and so on. Then they took just the first digits from all the numbers, and checked to see if they deviated from what you would predict using Benford’s law.
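Leaving aside whichever statistic the paper itself used, the basic check is simple: count the leading digits, then measure how far the counts sit from Benford’s expectation. Here is a minimal sketch using a Pearson chi-squared statistic, one common choice (the function is mine, not the researchers’):

```python
import math
from collections import Counter

def benford_chi2(numbers):
    """Pearson chi-squared statistic of observed leading digits vs Benford."""
    digits = [int(str(abs(int(n)))[0]) for n in numbers if int(n) != 0]
    observed = Counter(digits)
    total = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = total * math.log10(1 + 1 / d)
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2

# Powers of two follow Benford closely, so the statistic is small;
# uniformly spread data (100 to 999) deviates far more.
close = benford_chi2([2 ** k for k in range(1, 200)])
far = benford_chi2(range(100, 1000))
print(close < far)  # True
```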

The results were fun. Greece – whose economy has tanked – showed the largest and most suspicious deviation from Benford’s law of any country in the Euro.

This isn’t a massive surprise: the EU has run several investigations into Greece’s numbers already, and the ones from 2005 to 2008 were repeatedly revised upwards after the fact. But it’s neat, and if you wanted to while away a very nerdy afternoon, you could even download the data, for free, from Eurostat, and repeat the analysis for yourself. Joy!

The Certainty of Chance

Guardian, 6 September 2008

Britain’s happiest places have been mapped by scientists, according to the BBC: Edinburgh is the most miserable place in the country, and they were overbrimming with technical details on exactly how miserable we are in each area of Britain. The story struck a chord, and was lifted by journalists throughout the nation, as we cheerfully castigated ourselves. ‘Misera-Poole?’ asked the Dorset Echo. ‘No Smiles in Donny’, said Doncaster Today.

From the Bromley Times, through Bexley, Dartford and Gravesham, to the Hampshire Chronicle, everyone was keen to analyse and explain their ranking. ‘Basingstoke lacks any sense of community or heart,’ said Reverend Dr Derek Overfield, industrial chaplain for the area. And so on.

Exactly what kind of data is the good reverend explaining there? The Times had some methodological information: ‘Researchers at Sheffield and Manchester universities based their findings on more than 5,000 responses from the annual British Household Panel Survey.’ According to the BBC it was presented in a lecture at some geographical society. ‘However,’ it said quietly, ‘the researchers stress that the variations between different places in Britain are not statistically significant.’

Here, nestled away, halfway through the gushing barrage of data and facts, was an unmarked confession: this entire news story was based on nothing more than random variation.

There are many reasons why you might see differences between different areas in your survey data on how miserable people are, and people being differently miserable is only one explanation. There might also be, of course, the play of chance: 5,000 people in 274 areas doesn’t give you many in each town – fewer than twenty, in fact – so you might just happen to have picked out more miserable people in Edinburgh, and miss the fact that misery is uniformly distributed throughout the country.
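The point is easy to demonstrate by simulation: give every town an identical underlying rate of misery, sample fewer than twenty people per town, and the town-by-town league table still produces winners and losers by chance alone. The figures below are my own illustrative choices, not the survey’s:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TOWNS, PER_TOWN, TRUE_RATE = 274, 18, 0.25  # everyone is equally miserable

# Sample ~18 residents per town; count how many come out 'miserable'.
town_rates = [
    sum(random.random() < TRUE_RATE for _ in range(PER_TOWN)) / PER_TOWN
    for _ in range(TOWNS)
]

# Despite identical towns, sampling error alone produces a spread:
print(f"happiest town: {min(town_rates):.0%} miserable")
print(f"gloomiest town: {max(town_rates):.0%} miserable")
```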

This is called sampling error, and it quietly undermines almost every piece of survey data ever covered in any newspaper. Although the phenomenon has spawned a fiendish area of applied maths called ‘statistics’, the basic principles are best understood with a simple game.

Dr W. Edwards Deming was a charismatic American management guru who railed against performance-related pay on the grounds that it arbitrarily rewarded luck. Working in a theatrical field, he demonstrated his ideas with a simple piece of stagecraft he called the Red Bead Experiment.

Deming would appear at management conferences with a big trough containing thousands of beads which were mostly white, but 20 per cent were red. Eight volunteers were then invited up on stage from the audience of management drones: three to be managers, and five to be workers. ‘Your job,’ Deming explained solemnly, ‘is to make white beads.’

He then produced a paddle with fifty holes cut into it, which was passed to each ‘worker’ in turn. They dipped the paddle into the trough, wiggled it around, and tried to produce as many white beads as they could manage through this entirely random process.

‘Go and show the inspectors,’ Deming would say sternly.

‘Only five red beads, well done! Fourteen red beads? I think we need to re-evaluate your skill set.’ Workers were sacked, promoted, retrained and redeployed, to great amusement.
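Deming’s game is trivial to simulate: every ‘worker’ draws fifty beads from the same trough of 20 per cent red beads, yet their scores differ enough to sack some and promote others. A sketch, with the paddle and bead proportions from the article and everything else my own invention:

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

PADDLE, RED_FRACTION = 50, 0.20  # fifty holes; one bead in five is red

def draw_paddle():
    """One dip of the paddle: count the red beads in a random scoop."""
    return sum(random.random() < RED_FRACTION for _ in range(PADDLE))

workers = {f"worker {i}": draw_paddle() for i in range(1, 6)}
print(workers)
# Every worker faces the same expected ten red beads per dip;
# the gap between 'best' and 'worst' is pure luck.
```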

We ignore basic principles like sampling error at our peril, because the illusion of control, which we all carry around for the sake of sanity, is more powerful than we think, and countless workers have had their lives turned to misery for the simple crime of pulling out fifteen red beads.

Back in the world of misery, were the journalists blameless, and guilty only of ignorance? For any individual, nobody can tell. But Dr Dimitris Ballas, the academic who did the research, has a clue: ‘I tried to explain issues of significance to the journalists who interviewed me. Most,’ he says, ‘did not want to know.’

Sampling Error, the Unspoken Issue Behind Small Number Changes in the News
