
We made a detailed and exhaustive inventory of the time spent by each wasp on each burrow with which she was associated. We divided each individual female’s adult lifetime into consecutive episodes of known duration, each episode being designated a digging episode if the wasp concerned began her association with the burrow by digging it. Otherwise, it was designated an entering episode. The end of each episode was signalled by the wasp’s leaving the nest for the last time. This instant was also treated as the start of the next burrow episode, even though the next burrow site had not, at the time, been chosen. That is to say, in our time accounting, the time spent searching for a new burrow to enter, or searching for a place to dig a new burrow, was designated retroactively as time ‘spent’ on that new burrow. It was added to the time subsequently spent provisioning the burrow with katydids, fighting other wasps, feeding, sleeping, etc., until the wasp left the new burrow for the last time.

At the end of the season, therefore, we could add up the total number of wasp hours spent on dug-burrow episodes, and also the total number of wasp hours spent on entered-burrow episodes. For the New Hampshire study these two figures were 8518.7 hours and 6747.4 hours, respectively. This is regarded as time spent, or invested, for a return, and the return is measured in numbers of eggs. The total number of eggs laid at the end of the dug-burrow episodes (i.e. by wasps that had dug the burrow concerned) in the whole New Hampshire population during the year of study was 82. The corresponding number for entered-burrow episodes was 57 eggs. The success rate of the digging subroutine was, therefore, 82/8518.7 ≈ 0.0096 eggs per hour, or 0.96 eggs per 100 hours. The success rate of the entering subroutine was 57/6747.4 ≈ 0.0084 eggs per hour, or 0.84 eggs per 100 hours. These success scores are averaged across all the individuals who used the two subroutines. Instead of counting the number of eggs laid in her lifetime by an individual wasp—the equivalent of measuring the wheat yield of each of the ten fields in the analogy—we count the number of eggs laid ‘by’ the digging (or entering) subroutine per unit ‘running time’ of the subroutine.
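For concreteness, the rate calculation amounts to nothing more than the following minimal Python sketch, with the New Hampshire totals quoted above hard-coded; the function and variable names are mine, not part of the original analysis:

    # Success rate of a subroutine = total eggs laid during its episodes
    # divided by the total wasp-hours spent on those episodes.
    dig_hours, dig_eggs = 8518.7, 82
    enter_hours, enter_eggs = 6747.4, 57

    def eggs_per_100_hours(eggs, hours):
        """Success rate expressed in eggs per 100 wasp-hours."""
        return 100.0 * eggs / hours

    print(f"digging:  {eggs_per_100_hours(dig_eggs, dig_hours):.2f} eggs per 100 h")    # ~0.96
    print(f"entering: {eggs_per_100_hours(enter_eggs, enter_hours):.2f} eggs per 100 h")  # ~0.84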

There is another respect in which it would have been difficult for us to have done this analysis if we had insisted on thinking in terms of individual success. In order to solve the equation to predict the equilibrium entering frequency, we had to have empirical estimates of the expected payoffs of each of the four ‘outcomes’ (abandons, remains alone, is joined, joins). We obtained payoff scores for the four outcomes in the same way as we obtained success scores for each of the two strategies, dig and enter. We averaged over all individuals, dividing the total number of eggs laid in each outcome by the total time spent on episodes that ended up in that outcome. Since most individuals experienced all four outcomes at different times, it is not clear how we could have obtained the necessary estimates of outcome payoffs if we had thought in terms of individual success.
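To illustrate how such pooled payoff estimates could be tabulated, here is a sketch that assumes the episode records are simply a list of (outcome, eggs, hours) entries; the data layout and the example numbers are invented for illustration, not drawn from the real records:

    from collections import defaultdict

    # Hypothetical episode records pooled across all individuals:
    # (outcome, eggs laid during the episode, hours the episode lasted).
    episodes = [
        ("abandons", 0, 12.5),
        ("remains alone", 1, 41.0),
        ("is joined", 1, 56.5),
        ("joins", 0, 19.0),
        # ... one entry per episode in the season's records
    ]

    totals = defaultdict(lambda: [0, 0.0])   # outcome -> [total eggs, total hours]
    for outcome, eggs, hours in episodes:
        totals[outcome][0] += eggs
        totals[outcome][1] += hours

    # Payoff of an outcome = total eggs laid in episodes ending in that outcome,
    # divided by the total time spent on those episodes.
    payoffs = {outcome: eggs / hours for outcome, (eggs, hours) in totals.items()}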

Notice the important role of time in the computation of the ‘success’ of the digging and entering subroutines (and of the payoff given by each outcome). The total number of eggs laid ‘by’ the digging subroutine is a poor measure of success until it has been divided by the time spent on the subroutine. The number of eggs laid by the two subroutines might be equal, but if digging episodes are on average twice as long as entering episodes, natural selection will presumably favour entering. In fact rather more eggs were laid ‘by’ the digging subroutine than by the entering one, but correspondingly more time was spent on the digging subroutine so the overall success rates of the two were approximately equal. Notice too that we do not specify whether the extra time spent on digging is accounted for by a greater number of wasps digging, or by each digging episode lasting longer. The distinction may be important for some purposes, but it doesn’t matter for the kind of economic analysis we undertook.

It was clearly stated in the original paper (Brockmann, Grafen & Dawkins 1979), and must be repeated here, that the method we used depended upon some assumptions. We assumed, for instance, that a wasp’s choice of subroutine on any particular occasion did not affect her survival or success rate after the end of the episode concerned. Thus the costs of digging were assumed to be reflected totally in the time spent on digging episodes, and the costs of entering reflected in the time spent on entering episodes. If the act of digging had imposed some extra cost, say a risk of wear and tear to the limbs, shortening life expectation, our simple time-cost accounting would need to be amended. The success rates of the digging and the entering subroutines would have to be expressed, not in eggs per hour, but in eggs per ‘opportunity cost’. Opportunity cost might still be measured in units of time, but digging time would have to be scaled up in a costlier currency than entering time, because each hour spent in digging shortens the expectation of effective life of the individual. Under such circumstances it might be necessary, in spite of all the difficulties, to think in terms of individual success rather than subroutine success.
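If such an extra cost existed, the amendment could be as simple as weighting digging hours before dividing; the wear-and-tear factor in the sketch below is entirely hypothetical (the published analysis, in effect, assumed it to be 1):

    # Opportunity-cost accounting: scale each subroutine's hours by how much an
    # hour of that activity shortens the expectation of effective life.
    WEAR_FACTOR_DIG = 1.2     # hypothetical: digging assumed 20% costlier per hour
    WEAR_FACTOR_ENTER = 1.0

    def eggs_per_unit_cost(eggs, hours, cost_per_hour):
        return eggs / (hours * cost_per_hour)

    dig_rate = eggs_per_unit_cost(82, 8518.7, WEAR_FACTOR_DIG)
    enter_rate = eggs_per_unit_cost(57, 6747.4, WEAR_FACTOR_ENTER)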

It is for this kind of reason that Clutton-Brock et al. (1982) are probably wise in their ambition to measure the total lifetime reproductive success rates of their individual red deer stags. In the case of Brockmann’s wasps, we have reason to think that our assumptions were correct, and that we were justified in ignoring individual success and concentrating on subroutine success. Therefore what N. B. Davies, in a lecture, jocularly called the ‘Oxford method’ (measuring subroutine success) and the ‘Cambridge method’ (measuring individual success) may each be justified in different circumstances. I am not saying that the Oxford method should always be used. The very fact that it is sometimes preferable is sufficient to answer the claim that field workers interested in measuring costs and benefits always have to think in terms of individual costs and benefits.

When computer chess tournaments are held, a layman might imagine that one computer plays against another. It is more pertinent to describe the tournament as being between programs. A good program will consistently beat a poor program, and it doesn’t make any difference which physical computer either program is running on. Indeed the two programs could swap physical computers every other game, each one running alternately in an IBM and an ICL computer, and the result at the end of the tournament will be the same as if one program consistently ran in the IBM and the other consistently ran in the ICL. Similarly, to return to the analogy at the beginning of this chapter, the digging subroutine ‘runs’ in a large number of different physical wasp nervous systems. Entering is the name of a rival subroutine which also runs in many different wasp nervous systems, including some of the same physical nervous systems as, at other times, run the digging subroutine. Just as a particular IBM or ICL computer functions as the physical medium through which any of a variety of chess programs can act out their skills, so one individual wasp is the physical medium through which sometimes the digging subroutine, at other times the entering subroutine, acts out its characteristic behaviour.

As already explained, I call digging and entering ‘subroutines’, rather than programs, because we have already used ‘program’ for the overall lifetime choosing rules of an individual. An individual is regarded as being programmed with a rule for choosing the digging or the entering subroutine with some probability p. In the special case of a polymorphism, where each individual is either a lifelong digger or a lifelong enterer, p becomes 1 or 0, and the categories program and subroutine become synonymous. The beauty of calculating the egg-laying success rates of subroutines rather than of individuals is that the procedure we adopt is the same regardless of where on the mixed strategy continuum our animals are. Anywhere along the continuum we still predict that the digging subroutine should, at equilibrium, enjoy a success rate equal to that of the entering subroutine.
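The point can be made concrete with a toy simulation: whether the wasps supplied to it are pure diggers, pure enterers, or probabilistic mixers, the accounting below pools episodes by subroutine and never asks which individual ran them. Every number in it (the p values, episode lengths and egg chances) is invented:

    import random

    def subroutine_rates(wasps, episodes_per_wasp=20):
        """wasps is a list of digging probabilities, one per individual."""
        tally = {"dig": [0, 0.0], "enter": [0, 0.0]}   # eggs, hours per subroutine
        for p in wasps:
            for _ in range(episodes_per_wasp):
                sub = "dig" if random.random() < p else "enter"
                hours = random.uniform(10, 100)            # invented episode length
                eggs = 1 if random.random() < 0.4 else 0   # invented egg chance
                tally[sub][0] += eggs
                tally[sub][1] += hours
        return {s: 100.0 * e / h for s, (e, h) in tally.items()}   # eggs per 100 h

    # A polymorphic population and a mixed-strategy population are handled identically:
    print(subroutine_rates([0.0] * 50 + [1.0] * 50))
    print(subroutine_rates([0.41] * 100))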

It is tempting, though rather misleading, to push this line of thought to what appears to be its logical conclusion, and think in terms of selection acting directly on subroutines in a subroutine pool. The population’s nervous tissue, its distributed computer hardware, is inhabited by many copies of the digging subroutine and many copies of the entering subroutine. At any given time the proportion of running copies of the digging subroutine is p. There is a critical value of p, called p*, at which the success rate of the two subroutines is equal. If either of the two becomes too numerous in the subroutine pool, natural selection penalizes it and the equilibrium is restored.
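A toy version of this frequency dependence, purely for illustration: each subroutine is assumed to do worse the commoner it is (the linear payoff functions and constants below are invented, not estimated from the wasp data), and p is nudged toward whichever subroutine is currently doing better until the two rates are equal at p*:

    # Invented frequency-dependent payoffs: each subroutine's success rate
    # declines as the fraction of running copies devoted to it rises.
    def dig_success(p):        # p = proportion of running copies that are 'dig'
        return 1.2 - 0.8 * p

    def enter_success(p):
        return 1.1 - 0.6 * (1.0 - p)

    p = 0.9                    # start well away from equilibrium
    for _ in range(2000):
        p += 0.01 * (dig_success(p) - enter_success(p))   # favour the better subroutine
        p = min(max(p, 0.0), 1.0)

    # At equilibrium dig_success(p*) == enter_success(p*), i.e. 1.2 - 0.8p = 0.5 + 0.6p,
    # giving p* = 0.5; the loop converges there.
    print(round(p, 3))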

The reason this is misleading is that selection really works on the differential survival of alleles in a gene pool. Even with the most liberal imaginable interpretation of what we mean by gene control, there is no useful sense in which the digging subroutine and the entering subroutine could be thought of as being controlled by alternative alleles. If for no other reason, this is because the wasps, as we have seen, are not polymorphic, but are programmed with a stochastic rule for choosing to dig or enter on any given occasion. Natural selection must favour genes that act on the stochastic program of individuals, in particular controlling the value of p, the digging probability. Nevertheless, although it is misleading if taken too literally, the model of subroutines competing directly for running time in nervous systems provides some useful short cuts to getting the right answer.

The idea of selection in a notional pool of subroutines also leads us to think about yet another time-scale on which an analogue of frequency-dependent selection might occur. The present model allows that from day to day the observed number of running copies of the digging subroutine might change, as individual wasps obeying their stochastic programs switch their hardware from one subroutine to another. So far I have implied that a given wasp is born with a built-in predilection to dig with a certain characteristic probability. But it is also theoretically possible that wasps might be equipped to monitor the population around them with their sense organs, and choose to dig or enter accordingly. In ESS jargon focusing on the individual level, this would be regarded as a conditional strategy, each wasp obeying an ‘if clause’ of the following form: ‘If you see a large amount of entering going on around you, dig, otherwise enter.’ More practically, each wasp might be programmed to follow a rule of thumb such as: ‘Search for a burrow to enter; if you have not found one after a time t, give up and dig your own.’ As it happens our evidence goes against such a ‘conditional strategy’ (Brockmann & Dawkins 1979), but the theoretical possibility is interesting. From the present point of view what is particularly interesting is this. We could still analyse the data in terms of a notional selection between subroutines in a subroutine pool, even though the selection process leading to the restoration of the equilibrium when perturbed would not be natural selection on a generational time-scale. It would be a developmentally stable strategy or DSS (Dawkins 1980) rather than an ESS, but the mathematics could be much the same (Harley 1981).
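The rule of thumb can be written down almost verbatim. In the sketch below the search model (a constant chance of finding an enterable burrow per unit time) is a stand-in of my own and, as noted above, the evidence actually went against such a conditional strategy in the real wasps:

    import random

    def choose_subroutine(t_giveup, find_chance_per_step=0.05, step=1.0):
        """Rule-of-thumb sketch: search for a burrow to enter; if none has been
        found after time t_giveup, give up and dig a new one."""
        elapsed = 0.0
        while elapsed < t_giveup:
            if random.random() < find_chance_per_step:   # 'found a burrow to enter'
                return "enter"
            elapsed += step
        return "dig"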

I must warn that analogical reasoning of this sort is a luxury that we dare not indulge unless we are capable of clearly seeing the limitations of the analogy. There are real and important distinctions between Darwinian selection and behavioural assessment, just as there were real and important distinctions between a balanced polymorphism and a true mixed evolutionarily stable strategy. Just as the value of p, the individual’s digging probability, was considered to be adjusted by natural selection, so, in the behavioural assessment model, t, the individual’s criterion for responding to the frequency of digging in the population, is presumably influenced by natural selection. The concept of selection among subroutines in a subroutine pool blurs some important distinctions while pointing up some important similarities: the weaknesses of this way of thinking are linked to its strengths. What I do remember is that, when we were actually wrestling with the difficulties of the wasp analysis, one of our main leaps forward occurred when, under the influence of A. Grafen, we kicked the habit of worrying about individual reproductive success and switched to an imaginary world where ‘digging’ competed directly with ‘entering’; competed for ‘running time’ in future nervous systems.

This chapter has been an interlude, a digression. I have not been trying to argue that ‘subroutines’, or ‘strategies’ are really true replicators, true units of natural selection. They are not. Genes and fragments of genomes are true replicators. Subroutines and strategies can be thought of for certain purposes as if they were replicators, but when those purposes have been served we must return to reality. Natural selection really has the effect of choosing between alleles in wasp gene-pools, alleles which influence the probability that individual wasps will enter or dig. We temporarily laid this knowledge aside and entered an imaginary world of ‘inter-subroutine selection’ for a specific methodological purpose. We were justified in doing this because we were able to make certain assumptions about the wasps, and because of the already demonstrated mathematical equivalence between the various ways in which a mixed evolutionarily stable strategy can be put together.

As in the case of Chapter 4, the purpose of this chapter has been to undermine our confidence in the individual-centred view of teleonomy, in this case by showing that it is not always useful, in practice, to measure individual success if we are to study natural selection in the field. The next two chapters discuss adaptations which, by their very nature, we cannot even begin to understand if we insist on thinking in terms of individual benefit.

8 Outlaws and Modifiers

Natural selection is the process whereby replicators out-propagate each other. They do this by exerting phenotypic effects on the world, and it is often convenient to see those phenotypic effects as grouped together in discrete ‘vehicles’ such as individual organisms. This gives substance to the orthodox doctrine that each individual body can be thought of as a unitary agent maximizing one quantity—‘fitness’, various notions of which will be discussed in Chapter 10. But the idea of individual bodies maximizing one quantity relies on the assumption that replicators at different loci within a body can be expected to ‘cooperate’. In other words we must assume that the allele that survives best at any given locus tends to be the one that is best for the genome as a whole. This is indeed often the case. A replicator that ensures its own survival and propagation down the generations by conferring on its successive bodies resistance to a dangerous disease, say, will thereby benefit all the other genes in the successive genomes of which it is a member. But it is also easy to imagine cases where a gene might promote its own survival while harming the survival chances of most of the rest of the genome. Following Alexander and Borgia (1978) I shall call such genes outlaws.
