If so, then we should also consider yet another possible driver—or at least, facilitator—of human intelligence: cooking.
Cooking?

It’s less weird than you think. Cooking is intimately connected to fire, the “conquest” of which has long been intuited as crucial, although until recently, its actual biological payoff was obscure. Primatologist Richard Wrangham (a student of Jane Goodall) has proposed that it isn’t fire as such, but cooking that made us human, big brains and all.24

Wrangham points out that the greatest change in brain size leading to Homo sapiens occurred in the transition from Homo habilis to Homo erectus. And perhaps not coincidentally, ancestral jaws, teeth, and intestines became smaller at the same time that our brains got bigger.

Cooked food increases food safety by killing pathogens; it expands and enhances taste and retards spoilage. It enables us to eat foods that would otherwise be simply too tough. But none of these advantages is as important as a little-appreciated aspect: Cooking increases the amount of energy our bodies obtain from food. “The extra energy,” Wrangham argues,

gave the first cooks biological advantages. They survived and reproduced better than before. Their genes spread. Their bodies responded by biologically adapting to cooked food, shaped by natural selection to take maximum advantage of the new diet. There were changes in anatomy, physiology, ecology, life history, psychology and society.

As Wrangham sees it, by cooking meat or plants or both, digestion was facilitated, allowing our guts to grow smaller and more efficient. It would also have modified our time and energy budgets. Thus, he calculates that if our ancestors were limited to raw food—much of it only barely digestible and whose calories and nutrients are otherwise largely unavailable—they would have been obligated to spend many hours each day just chewing. There are animals, of course, who do just that: cattle, howler monkeys, and tree sloths, for example. Clearly, not all animals require the high-quality, nutrient-dense foods that cooking made available to our forebears. Termites do quite well digesting cellulose. But they aren’t very intelligent, or rather, their cleverness is limited to their evolved biochemical techniques of breaking down wood.

Digestion itself requires far more energy than most people imagine. By getting much of that energy from fire and using it instead of our own bodily resources to unlock much of the nutritional value in food, ancestral cooks freed up additional energy to grow big brains, which, as we have noted, consume enormous amounts of energy. One can spin the story farther, as Wrangham does: Fire’s warmth would have facilitated our shedding body hair, which in turn enabled us to run farther without overheating. Instead of eating on the run or at the immediate scene of a kill or patch of yummy tubers, we would have gathered around a cooking fire, thereby perhaps emphasizing the need for benevolent socialization—maybe even table manners. Insofar as the cooking fire also provided protection, it might have enabled our ancestors to sleep on the ground instead of in the trees, while further terrestrial adaptations—such as bipedalism—could have facilitated additional tool use and the ability to carry food, wood, tools, babies, and so on. The cooking hypothesis has the added benefit of reversing the standard assumption—namely, that human beings created technology—by suggesting that technology created human beings. We cook, therefore we are … smart.

Of course, all this is sheer speculation, but if you have read thus far, you know that speculation hasn’t kept us from hypothesizing in other respects.

Pressing on, further speculation suggests that not all the impacts of primitive cooking would have been salubrious. In particular, Wrangham proposes that if cooking became a female specialty, it might have contributed to the social subordination of women, insofar as they were expected to stay home and prepare dinner while the men went off in search of cookables. Of course, it could also be argued that under this “Homo cooker” scenario, women—as guardians of the crucial kitchen—would have become more central and thus more important, rather than less. A bigger problem with this hypothesis, however, is that there is no evidence, as yet, that early Homo erectus actually used fire.

Wrangham’s cooking hypothesis does not exhaust the possible linkage between human intelligence and nutrition. A highly regarded quartet of anthropologists from the University of New Mexico has suggested a model that makes explicit use of some of the distinctive landmarks in human life history.25 Making a long and convoluted story short, their argument is that the human life pattern, compared to other mammals in general and primates in particular, is distinctive in experiencing a very long life span, extended time of juvenile dependence, assistance in childrearing provided by older postreproductive individuals (many of them relatives), and male assistance in childrearing. Our species is also distinctive, of course, in being very intelligent, which leads the four anthropologists to suggest that these phenomena are connected.

As they see it, the crucial factor uniting the four key life history traits to high intelligence is a dietary shift toward “high-quality, nutrient-dense, and difficult-to-acquire food resources.” The logic, in the scientists’ own words, is as follows:

First, high levels of knowledge, skill, coordination, and strength are required to exploit the suite of high-quality, difficult-to-acquire resources humans consume. The attainment of those abilities requires time and a significant commitment to development. This extended learning phase, during which productivity is low, is compensated for by higher productivity during the adult period and an intergenerational flow of food from old to young. Because productivity increases with age, the investment of time in acquiring skill and knowledge leads to selection for lowered mortality rates and greater longevity. The returns on investments in development occur at older ages. This, in turn, favors a longer juvenile period if there are important gains in productive ability with body size and growth ceases at sexual maturity.

Second, we believe that the feeding niche that involves specializing on large, valuable food packages promotes food sharing, provisioning of juveniles, and increased grouping, all of which act to lower mortality during the juvenile and early adult periods. Food sharing and provisioning assist recovery in times of illness and reduce risk by limiting juvenile time allocation to foraging. Grouping also lowers predation risks. These buffers against mortality also favor a longer juvenile period and higher investment in other mechanisms to increase the life span.

Thus, we propose that the long human life span co-evolved with lengthening of the juvenile period, increased brain capacities for information processing and storage, and intergenerational resource flows, all as a result of an important dietary shift. Humans are specialists in that they consume only the highest-quality plant and animal resources in their local ecology and rely on creative, skill-intensive techniques to exploit them. Yet the capacity to develop new techniques for extractive foraging and hunting allows them to exploit a wide variety of different foods and to colonize all of earth’s terrestrial and coastal ecosystems.

As complex and resourceful as it is, the human mind—whatever the selection forces that caused it to evolve—also possesses a stubborn fondness for simple, unitary explanations. And this, in turn, generates resistance to multifactorial models such as this one! But the fact that our minds often prefer simple interpretations (e.g., that the earth is flat, that the sun moves through the sky, etc.) doesn’t make them true. When it comes to its adaptive value, moreover, the human brain evolved to promote its own reproductive success, and not with any mandate of understanding itself. Best to be patient, therefore. As with the other hypotheses we have considered thus far, the final word on cooking as well as the role of nutrition and life history parameters has not been spoken. Or written. And perhaps not even imagined.

Purloined Intelligence?

Finally, we need to confront the most obvious possible adaptive value of intelligence, one that might be called the Purloined Letter hypothesis, after the short story by Edgar Allan Poe, in which the police tore apart someone’s apartment, seeking in vain a letter that was “hidden in plain sight”—right there on the suspect’s desk, and thus so obvious that it was overlooked. The idea is simply that greater intelligence was selected for because it led to higher survival (and thus reproduction). Much cynical amusement is generated, at least in modern times, by the so-called “Darwin Awards,” in which people get themselves killed as a result of behaving stupidly. And indeed, it is at least a reasonable hypothesis that there is—or at least, was—a correlation at the other end of the population distribution as well: Smarter people may indeed have survived and reproduced more successfully than stupid ones, independent of their social or Machiavellian intelligence.26

But wouldn’t such pressures apply to all living things? And if so, why don’t we observe highly intelligent earthworms, oysters, or even radishes? For one thing, intelligence is expensive; as we have already noted, it requires a complex and metabolically costly brain. And for another, it is at least possible that human beings are uniquely intelligent because we have placed ourselves on a path where high intelligence has become uniquely valuable, even necessary.

The evolution of technology, for example, while providing early Homo sapiens with distinct evolutionary advantages, may well have also subjected our ancestors to distinct risks: cutting themselves with their ingenious tools, endangering themselves by their new-found ability to confront and (usually but not always) overcome other large animals, constructing structures that provide shelter but also the risk of collapsing on their builders, and so forth. As Spider-Man aficionados often point out, with great power comes great responsibility … and also an increased risk that some people—likely, the least intelligent—will screw up.

If so, then human beings might have created for themselves a kind of one-way ratchet, with the increasing complexity of their own innovations being not only a consequence of their intelligence but also an amplifier of the importance of such intelligence as a survival mechanism. The rapid spread of early human ancestors from our African birthplace throughout the world could also have enhanced the import of intelligence as a straightforward Darwinian benefit, especially as our great-great-grandparents encountered a wide range of new and challenging environments, from desert and forest to mountain and oceanside. All the while, the Darwinian, reproductive impact of smart survivors versus stupid losers would have been enhanced by the fact that early mortality on the part of the latter would also have left their orphaned offspring with lower survival and reproductive prospects, a kind of evolutionary double jeopardy.

Consciousness

We have been concerned thus far with intelligence rather than with consciousness. The two are doubtless related, however, in that both rely upon complex neural architecture, such that it is easy to assume that they are inextricably connected. But it ain’t necessarily so. Computers, for example, are highly intelligent. They can defeat the best human grandmasters and perform very difficult calculations, but they don’t (yet?) show any signs of possessing an independent and potentially even rebellious self-awareness like HAL in Stanley Kubrick’s movie 2001: A Space Odyssey. At the same time, it is an open question whether a creature—or a machine—can be conscious without being intelligent, although this seems probable, if only because most people would acknowledge that someone who is moderately or even severely retarded (and thus not very intelligent) can nonetheless be conscious.

Consciousness may well be a sine qua non, necessary but not sufficient for humanness, all of which leads inevitably to the question: Why has consciousness evolved?

First, let’s try to define it, or at least, to gesture in that direction. If nothing else, we can refer once again to Supreme Court Justice Potter Stewart’s oft-repeated observation concerning pornography: We may not be able to define consciousness, but we know it when we experience it. Indeed, one of the persistent conundrums awaiting anyone who grapples with consciousness is that whereas we perceive ourselves to be conscious, we can never be certain that anyone else is.

Here, nonetheless, is what seems like a reasonable definition of consciousness: a particular example of awareness (whatever that is!), characterized by recursiveness in which individuals are not only aware but also aware that they are aware. By this conception, many animals are aware but not strictly conscious. My Boxer dog, for example, is exquisitely aware of and responsive to just about everything around him—more so, in many cases, than I. I know, however, that I am conscious because I am aware of my own internal mental state, sometimes even paradoxically aware of that about which I am unaware.

On the other hand, I have little real doubt that my dog is conscious, although I can’t prove it (ditto for my cats and horses). A more satisfying stance, therefore—empathically as well as ethically—is to give in to common sense and stipulate that different animal species possess differing degrees of consciousness. This may be more intellectually satisfying as well, since postulating a continuum of consciousness is consistent with this fundamental evolutionary insight: cross-species, organic continuity.

In any event, the “why” question is as follows: Why should we (or any conscious species) be able to think about our thinking, instead of just plain old thinking, full stop? Why need we know that we know, instead of just knowing? Isn’t it enough to feel, without also feeling good—or bad—about the fact that we are feeling? After all, there are downsides to consciousness. For Dostoyevsky’s Grand Inquisitor, consciousness and its requisite choices compose a vast source of human pain (one that he obviated by telling people how to think and what to believe). For Ernest Becker,
