PERCENTAGES
 

Percentages are often seen as particularly useful because they deal with the possible problem of the disputable significance of overall numbers. Thus, the information that, for every hour that the TV is on, there is a seven per cent decrease in the number of words that a child hears is probably more helpful than knowing that the child hears 770 fewer words. The overall number might seem more dramatically significant but the problem is knowing how significant it really is.
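To see how the two figures relate, here is a minimal arithmetic sketch. It assumes, purely for illustration, that the seven per cent figure and the 770-word figure describe the same change (the text implies this but does not spell it out):

    # If a 7% decrease corresponds to 770 fewer words heard,
    # the implied baseline number of words is:
    drop_fraction = 0.07
    words_lost = 770
    baseline = words_lost / drop_fraction
    print(baseline)   # 11000.0, i.e. the 770-word drop is 7% of about 11,000 words

Knowing the baseline is what lets us judge how dramatic the 770-word figure really is.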

 

There can, however, be issues of significance with percentages. These are found especially when we are looking at percentages involving small numbers. Look at the next argument:

 

At the fifth Teenage Cancer Trust conference held in June 2008, it was reported that between 1979 and 2003, the incidence of cervical cancer had increased by 1.6 per cent per year. But the figure for those aged 15–19 was 6.8 per cent per year. These figures show that it is the increase in teenagers with the disease that is causing the overall increase. Therefore we need to have a campaign to educate teenagers about the dangers of having lots of sexual partners.

 

You can see that the author gives us two different percentages. Both refer to percentage changes over the period 1979–2003. In this way, they might be seen as comparable. In one important way, they are. But might there be a problem in comparing the two? The first covers cases of cervical cancer in all age groups in the UK over a 24-year period. The second covers only the 15–19 age group. This alerts us to why there might be a problem. Obviously the 15–19 age group is significantly smaller in number than that of all age groups (15 to 100+). In addition, we would expect very few females aged 15–19 to get cervical cancer (indeed, to get any form of cancer). And this turns out to be a devastatingly significant point. It has been calculated that a 6.8 per cent increase in cervical cancer per year for the 15–19 age group represents an actual increase of only 0.1 to 0.2 cases a year. This means that a 6.8 per cent increase is equivalent to one or two cases every ten years!
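We can check this sort of claim ourselves. A minimal sketch of the arithmetic, using only the 6.8 per cent figure and the 0.1 to 0.2 extra cases a year quoted above:

    # If a 6.8% annual increase amounts to only 0.1-0.2 extra cases a year,
    # the underlying number of cases each year must be tiny:
    annual_increase = 0.068
    for extra_cases in (0.1, 0.2):
        implied_base = extra_cases / annual_increase
        print(round(implied_base, 1))   # roughly 1.5 and 2.9 cases a year

With a base of only one to three cases a year, a 6.8 per cent rise is, as the text says, equivalent to only one or two extra cases every ten years.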

 

So when the CEO from the Teenage Cancer Trust said, ‘It is worrying that cervical cancer… [is] increasing in teenagers faster than in other groups. More education is desperately needed so young people can change their behaviour before it’s too late’, we would say, beware of drawing the wrong inference from percentages. (He was also talking about melanoma, an issue we considered when we were looking at sunbeds and suncreams.)

 

There was a similar problem when Critical Thinking was described as the ‘fastest-growing A-level in Britain’ a few years ago. This was easily explained. Because it started off from such a low figure, it was easy to be the fastest-growing A-level!

 

So when we’re given evidence-claims in the form of a percentage, working out their significance is a task where we need to tread with care. In particular, we need to ask:

 

• What is the number from which the percentage is calculated?

 

• When comparing percentages of different groups, are the numbers themselves sufficiently comparable?

 

In addition, when we’re looking at percentage changes over a given timescale, we need to ask these questions:

 


• Is the timescale itself problematic? For example, a timescale could be selected because it fits with the author’s position by emphasising a particular percentage increase or decrease.

 


• What sort of percentage change would we expect if there was no significant difference from one period to another? We have to be careful here. It could well be that we would expect an increase or decrease anyway, for all sorts of reasons.

 

Overall truancy rates rose to 1.1 per cent in the Spring term of 2009 compared to 1 per cent for the same term of 2008.

 

Is this significant? The schools spokesman for the Liberal Democrat Party must have thought so, because he described the figures as ‘a disgrace. The Government’s truancy strategies are not working. Ministers have poured hundreds of millions of pounds into reducing truancy over recent years but this money seems to have been completely wasted.’ (The Times, 27 August 2009)

 

Our response, as Critical Thinkers, would be to say, ‘Now hold on, David Laws, we need to think more carefully about this. Is a rise of 0.1 of a percentage point significant? Does it tell us that the money spent on reducing truancy has been “completely wasted”? What sort of percentage figure are you looking for? Zero per cent? We don’t know what the truancy percentage would have been without the truancy strategies, so perhaps a rise of 0.1 of a percentage point is pretty good. Do we need to know what the figure was for years earlier than 2008? Was 2008 an unusually low figure, so that a small increase in 2009 is actually pretty good?’
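It is also worth being clear which kind of ‘0.1’ we are talking about. A minimal sketch, using only the 1 per cent and 1.1 per cent figures quoted above, of the difference between an absolute change in percentage points and a relative percentage change:

    # Truancy rate: 1.0% in Spring 2008, 1.1% in Spring 2009
    old_rate, new_rate = 1.0, 1.1
    point_change = new_rate - old_rate                         # change in percentage points
    relative_change = (new_rate - old_rate) / old_rate * 100   # change relative to the 2008 rate
    print(round(point_change, 1), round(relative_change, 1))   # 0.1 10.0

A rise of 0.1 of a percentage point is also a 10 per cent relative increase; which of the two a speaker chooses to quote can make the same change sound trivial or alarming.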

 

And so on. Quite simply, percentages can be very slippery customers, with their significance sliding through our fingers as we try to grab hold of it. We very often need to know much more before we can start drawing useful inferences.

 
REAL NUMBERS
 

So, what about real numbers? Instead of percentages, what about the numbers from which they’re taken? Do they give us the opportunity to hold on to something much more significant?

 

Look at the next example:

 

Britain spends a higher amount on cosmetic surgery than any other country in Europe. In 2006 this was £497m. The second highest was Italy with £158m. In fact, if we add up the total amount spent by the countries that were second, third (France), fourth (Germany), and fifth (Spain) in the league table of spending, this total is still less than the amount spent in Britain. This shows that British people are the vainest in Europe.

 

There’s a lot of possible evaluation we could do with this argument, especially in terms of alternative explanations giving us different significances for the evidence. One of the evaluation questions we might want to ask concerns population size. Perhaps the UK population size is sufficiently large to (partly) explain the UK’s position in the European league table. A quick look at the numbers suggests not.

 

The population of the UK is 59.8m; that of Italy is 58.1m; France’s is 60.7m; Germany’s is 82.7m; and the only one noticeably lower than the UK is Spain, with 43.4m.

 

There would have been little point translating these numbers into percentages: the numbers give us the information we need to see that it is not population size that in itself explains what’s going on. (You still, of course, need to consider alternative explanations in relation to the inference. There are quite a few.)
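If we did want to bring population size explicitly into the comparison, a per-person figure is easy to sketch from the numbers above. Only the UK and Italian spending totals are quoted, so only those two appear here:

    # Cosmetic surgery spending in 2006, divided by population
    spend_millions = {"UK": 497, "Italy": 158}          # £ millions
    population_millions = {"UK": 59.8, "Italy": 58.1}   # millions of people
    for country in spend_millions:
        per_person = spend_millions[country] / population_millions[country]
        print(country, round(per_person, 2))            # UK about £8.31, Italy about £2.72

Even per person, UK spending is roughly three times the Italian figure, which is why population size alone does not explain the league table.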

 

Another example in which numbers themselves give us lots of information is the amount of time spent watching TV. The US tops the international league table here with an average daily household viewing of a little over eight hours. (Second is Turkey with five hours, with the UK way down the table on only three.) There would be no point in translating hours per day into a different measure, given that the number of hours per day does not change.

 

However, sometimes numbers might express something, but we’re not sure what. For example, we know that the number of aid workers killed in 2008 whilst on duty was 140. In 2007, it was 75; in 2006 it was 84. So there was a big increase in aid workers (especially locally-recruited ones) being killed whilst working in 2008. Is this significant? Here we would need to know whether the number of aid workers has increased overall before we could infer something like ‘the risk of death for aid workers has increased’.

 

Sometimes there’s a further way of expressing the possible significance of a numerical claim. This is in the form of a rate. A percentage is a type of rate, expressed as a proportion per hundred. But we can find rates expressed as proportions of larger (sometimes much larger) numbers. When we are dealing with very large numbers, a rate (say, per 10,000, 100,000, or more) makes the possible significance of the information more approachable.
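For example, to turn the aid worker deaths above into a comparable rate we would also need the size of the aid workforce in each year. Those workforce figures are not given in the text, so the numbers below are purely hypothetical placeholders used to show how the calculation would go:

    # Deaths per 10,000 aid workers (workforce figures are invented for illustration)
    deaths = {2006: 84, 2007: 75, 2008: 140}
    workforce = {2006: 210_000, 2007: 250_000, 2008: 300_000}   # hypothetical
    for year in deaths:
        rate_per_10_000 = deaths[year] / workforce[year] * 10_000
        print(year, round(rate_per_10_000, 1))   # depends entirely on the invented workforce figures

Only with the real workforce figures could we say whether the risk to an individual aid worker actually rose in 2008.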

 

For example, does it tell us very much to read that the US spent $607.3 billion on defence in 2008? It tells us something because, by any standards, that’s a lot of money. Is this figure given greater significance when we see that this amount represented 41.5 per cent of the total spent on defence in the world? Probably yes, because we can see that, given this percentage, the US will realistically be the biggest spender on defence of any country in the world.
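As a side calculation, the 41.5 per cent share also lets us back out the implied world total. A one-line sketch:

    # Implied world defence spending in 2008, from the US share
    us_spend_bn = 607.3
    world_total_bn = us_spend_bn / 0.415
    print(round(world_total_bn, 1))   # roughly 1463.4, i.e. about $1.46 trillion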

 

But there’s another measure which gives us a different significance. This is the amount spent on defence per person in a country’s population. Using that rate measure, the US is not top of the league table. This position is held by Israel. Though Israel spends (only) $16.2 billion on defence, this represents about $2,300 per person in the country. (The US spends almost $2,000 per person.) This league table of spending per person creates a very different picture from the overall amount and the percentage figures. For example, China is second in the list of percentage of global defence spending (with 5.8 per cent of all spending) but features nowhere in the top 15 spenders by rate per head of population (because of its massive population). Using this rate measure, a country like Brunei appears in the premier league of defence spenders, though it spends only $0.3 billion on defence.
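The per-person figure is simply total spending divided by population. A sketch using the spending figures quoted above: the populations are approximate 2008 values not given in the text, and China’s spending is derived from its 5.8 per cent share of the implied world total, so treat both as assumptions:

    # Defence spending per person, in US dollars
    spend_bn = {"US": 607.3, "Israel": 16.2, "China": 84.9}     # China: 5.8% of ~$1,463bn (assumed)
    population_m = {"US": 304, "Israel": 7.1, "China": 1_330}   # approximate 2008 populations (assumed)
    for country in spend_bn:
        per_person = spend_bn[country] * 1_000 / population_m[country]   # ($bn to $m) / (millions of people) = $ per person
        print(country, round(per_person))   # US about $2,000, Israel about $2,280, China about $64

The same spending figures produce three very different league tables depending on whether we look at totals, shares, or per-person rates.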

 

Another rate that can be used is that per person in a group or country. Using this measure shows us that Greece heads the international cigarette smoking league table, with a little over eight cigarettes a day being smoked per person. At one level, this tells us a lot, although using this particular measure doesn’t tell us everything. For example, the French come out 62nd in the league table, with only a little over two cigarettes per person. However, the French figure doesn’t include the 20 per cent of cigarettes sold illegally, so it looks lower than it really is. This last example shows that a per-person rate can be distorted by inadequate information. A further illustration is that of India, which barely registers on the per-person cigarette smoking scale. It comes in as 119th out of 123 countries surveyed. However, tobacco consumption is much higher than this evidence suggests, in that people in India have a fondness for chewing tobacco rather than smoking it.
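The distortion from the missing illegal sales is easy to quantify. A sketch, taking the French figure of ‘a little over two’ as roughly 2.1 for illustration:

    # If 20% of cigarettes sold in France are illegal and excluded from the figure,
    # the recorded rate captures only 80% of actual consumption
    recorded_per_day = 2.1            # "a little over two", illustrative value
    adjusted = recorded_per_day / 0.8
    print(adjusted)                   # 2.625, i.e. roughly 2.6 a day rather than 2.1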

 

So we can see that the way in which statistical evidence is presented can affect its possible significance. It will not surprise us, then, to see that such evidence is often used to serve an author’s interest. This can be done, for example, by picking a particular year as the starting point for a percentage change comparison in order to produce an artificially low or high change. It can be done by ignoring (or playing down the significance of) counter-evidence, a method called ‘cherry-picking’. This could be going on with the climate change debate, with different pieces of statistical evidence being used on both sides, such that there appears to be evidence both to support the claim that climate change is happening and to support the claim that it isn’t.

 

Sometimes numbers can have a manufactured significance. If we look at the numbers of troop casualties in military campaigns, what significance do they have? At the Battle of the Somme in 1916, there were something like 432,000 British casualties and around 500,000 German casualties. Was this more than expected? Was this an acceptable number? In recent military campaigns (such as in Afghanistan), a casualty rate of more than one a day is seen as significant (and even one a day is). In The Times of 29 September 2009, Martin Barrow refers to ‘appalling casualties’ in the war in Afghanistan. How would he then have described the casualty rate of the Somme or at the battle for Stalingrad? What casualty rate wouldn’t then appal? No casualties at all? Just ten or twenty?

 

Beware of the manufactured significance of numbers. Much of what is reported in the news has this significance. Numerical claims, like other claims we’ve been looking at throughout the book, take on a significance only when something is done with them.

 
 
 
 
EVALUATION OF ARGUMENTS: WEAKNESSES IN REASONING
 

In the previous chapter, we focused on the limits (and possibilities) of numerical evidence for inference. We established the point that inferences are normally only probably rather than certainly true. In this chapter we’re going to continue this theme of evaluating the relationship between claims and inferences from them.

 
