Authors: David Eagleman
Inspired by this art of consensus building, Abraham Lincoln chose to place adversaries William Seward and Salmon Chase in his presidential cabinet. He was choosing, in the memorable phrase of historian Doris Kearns Goodwin, a team of rivals. Rivalrous teams are central to modern political strategy. In February 2009, with Zimbabwe’s economy in free fall, President Robert Mugabe agreed to share power with Morgan Tsvangirai, a rival he’d earlier tried to assassinate. In March 2009, Chinese president Hu Jintao named the leaders of two bitterly opposed factions, Xi Jinping and Li Keqiang, to help craft China’s economic and political future.
I propose that the brain is best understood as a team of rivals, and the rest of this chapter will explore that framework: who the parties are, how they compete, how the union is held together, and what happens when things fall apart. As we proceed, remember that competing factions typically have the same goal—success for the country—but they often have different ways of going about it. As Lincoln put it, rivals should be turned into allies “for the sake of the greater good,” and for neural subpopulations the common interest is the thriving and survival of the organism. In the same way that liberals and conservatives both love their country but can have acrimoniously different strategies for steering it, so too does the brain have competing factions that all believe they know the right way to solve problems.
When trying to understand the strange details of human behavior, psychologists and economists sometimes appeal to a “dual-process” account.⁹ In this view, the brain contains two separate systems: one is fast, automatic, and below the surface of conscious awareness, while the other is slow, cognitive, and conscious. The first system can be labeled automatic, implicit, heuristic, intuitive, holistic, reactive, and impulsive, while the second system is cognitive, systematic, explicit, analytic, rule-based, and reflective.¹⁰ These two processes are always battling it out.
Despite the “dual-process” moniker, there is no real reason to assume that there are only two systems—in fact, there may be several. For example, in 1920 Sigmund Freud suggested three competing parts in his model of the psyche: the id (instinctive), the ego (realistic and organized), and the superego (critical and moralizing).¹¹ In the 1950s, the American neuroscientist Paul MacLean suggested that the brain is made of three layers representing successive stages of evolutionary development: the reptilian brain (involved in survival behaviors), the limbic system (involved in emotions), and the neocortex (used in higher-order thinking). The details of both of these theories have largely fallen out of favor among neuroanatomists, but the heart of the idea survives: brains are made of competing subsystems. We will proceed using the generalized dual-process model as a starting point, because it adequately conveys the thrust of the argument.
Although psychologists and economists think of the different systems in abstract terms, modern neuroscience strives for an anatomical grounding. And it happens that the wiring diagram of the brain lends itself to divisions that generally map onto the dual-process model.¹²
Some areas of your brain are involved in higher-order operations regarding events in the outside world (these include, for example, the surface of the brain just inside your temples, called the dorsolateral prefrontal cortex). In contrast, other areas are involved with monitoring your internal state, such as your level of hunger, sense of motivation, or whether something is rewarding to you (these areas include, for example, a region just behind your forehead called the medial prefrontal cortex, and several areas deep below the surface of the cortex). The situation is more complicated than this rough division would imply, because brains can simulate future states, reminisce about the past, figure out where to find things not immediately present, and so on. But for the moment, this division into systems that monitor the external and internal will serve as a rough guide, and a little later we will refine the picture.
In the effort to use labels tied neither to black boxes nor to neuroanatomy, I’ve chosen two that will be familiar to everyone: the rational and emotional systems. These terms are underspecified and imperfect, but they will nonetheless carry the central point about rivalries in the brain.¹³ The rational system is the one that cares about analysis of things in the outside world, while the emotional system monitors internal state and worries whether things will be good or bad. In other words, as a rough guide, rational cognition involves external events, while emotion involves your internal state. You can do a math problem without consulting your internal state, but you can’t order a dessert off a menu or prioritize what you feel like doing next.¹⁴
The emotional networks are absolutely required to rank your possible next actions in the world: if you were an emotionless robot who rolled into a room, you might be able to make analyses about the objects around you, but you would be frozen with indecision about what to do next. Choices about the priority of actions are determined by our internal states: whether you head straight to the refrigerator, bathroom, or bedroom upon returning home depends not on the external stimuli in your home (those have not changed), but instead on your body’s internal states.
The battle between the rational and emotional systems is brought to light by what philosophers call the trolley dilemma. Consider this scenario: A trolley is barreling down the train tracks, out of control. Five workers are making repairs way down the track, and you, a bystander, quickly realize that they will all be killed by the trolley. But you also notice that there is a switch nearby that you can throw, and that will divert the trolley down a different track, where only a single worker will be killed. What do you do? (Assume there are no trick solutions or hidden information.)
If you are like most people, you will have no hesitation about throwing the switch: it’s far better to have one person killed than five, right? Good choice.
Now here’s an interesting twist to the dilemma: imagine that the same trolley is barreling down the tracks, and the same five workers are in harm’s way—but this time you are a bystander on a footbridge that goes over the tracks. You notice that there is an obese man standing on the footbridge, and you realize that if you were to push him off the bridge, his bulk would be sufficient to stop the train and save the five workers. Do you push him off?
If you’re like most people, you bristle at this suggestion of murdering an innocent person. But wait a minute. What differentiates this from your previous choice? Aren’t you trading one life for five lives? Doesn’t the math work out the same way?
What exactly is the difference in these two cases? Philosophers working in the tradition of Immanuel Kant have proposed that the difference lies in how people are being used. In the first scenario, you are simply reducing a bad situation (the deaths of five people) to a less bad situation (the death of one). In the case of the man on the bridge, he is being exploited as a means to an end. This is a popular explanation in the philosophy literature. Interestingly, there may be a more brain-based approach to understanding the reversal in people’s choices.
In the alternative interpretation, suggested by the neuroscientists Joshua Greene and Jonathan Cohen, the difference in the two scenarios pivots on the emotional component of actually touching someone—that is, interacting with him at a close distance.¹⁵ If the problem is constructed so that the man on the footbridge can be dropped, with the flip of a switch, through a trapdoor, many people will vote to let him drop. Something about interacting with the person up close stops most people from pushing the man to his death. Why? Because that sort of personal interaction activates the emotional networks. It changes the problem from an abstract, impersonal math problem into a personal, emotional decision.
When people consider the trolley problem, here’s what brain imaging reveals: In the footbridge scenario, areas involved in motor planning and emotion become active. In contrast, in the track-switch scenario, only lateral areas involved in rational thinking become active. People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr. Spock.
The battle between emotional and rational networks in the brain is nicely illustrated by an old episode of The Twilight Zone. I am paraphrasing from memory, but the plot goes something like this: A stranger in an overcoat shows up at a man’s door and proposes a deal. “Here is a box with a single button on it. All you have to do is press the button and I will pay you a thousand dollars.”
“What happens when I press the button?” the man asks.
The stranger tells him, “When you press the button, someone far away, someone you don’t even know, will die.”
The man suffers over the moral dilemma through the night. The button box rests on his kitchen table. He stares at it. He paces around it. Sweat clings to his brow.
Finally, after an assessment of his desperate financial situation, he lunges to the box and punches the button. Nothing happens. It is quiet and anticlimactic.
Then there is a knock at the door. The stranger in the overcoat is there, and he hands the man the money and takes the box. “Wait,” the man shouts after him. “What happens now?”
The stranger says, “Now I take the box and give it to the next person. Someone far away, someone you don’t even know.”
The story highlights the ease of impersonally pressing a button: if the man had been asked to attack someone with his hands, he presumably would have declined the bargain.
In earlier times in our evolution, there was no real way to interact with others at a distance any farther than that allowed by hands, feet, or possibly a stick. That distance of interaction was salient and consequential, and this is what our emotional reaction reflects. In modern times, the situation differs: generals and even soldiers commonly find themselves far removed from the people they kill.
In Shakespeare’s Henry VI, Part 2, the rebel Jack Cade challenges Lord Say, mocking the fact that he has never known the firsthand danger of the battlefield: “When struck’st thou one blow in the field?” Lord Say responds, “Great men have reaching hands: oft have I struck those that I never saw, and struck them dead.” In modern times, we can launch forty Tomahawk surface-to-surface missiles from the decks of navy ships in the Persian Gulf and Red Sea with the touch of a button. The result of pushing that button may be watched by the missile operators live on CNN, minutes later, when Baghdad’s buildings disappear in plumes. The proximity is lost, and so is the emotional influence. This impersonal nature of waging war makes it disconcertingly easy. In the 1960s, one political thinker suggested that the button to launch a nuclear war should be implanted in the chest of the President’s closest friend. That way, should the President want to make the decision to annihilate millions of people on the other side of the globe, he would first have to physically harm his friend, ripping open his chest to get to the button. That would at least engage his emotional system in the decision making, so as to guard against letting the choice be impersonal.
Because both of the neural systems battle to control the single output channel of behavior, emotions can tip the balance of decision making. This ancient battle has turned into a directive of sorts for many people: If it feels bad, it is probably wrong.¹⁶ There are many counterexamples to this (for example, one may find oneself put off by another’s sexual preference but still deem nothing morally wrong with that choice), but emotion nonetheless serves as a generally useful steering mechanism for decision making.
The emotional systems are evolutionarily old, and therefore shared with many other species, while the development of the rational system is more recent. But as we have seen, the novelty of the rational system does not necessarily indicate that it is, by itself, superior. Societies would not be better off if everyone were like Mr. Spock, all rationality and no emotion. Instead, a balance—a teaming up of the internal rivals—is optimal for brains. This is because the disgust we feel at pushing the man off the footbridge is critical to social interaction; the impassivity one feels at pressing a button to launch a Tomahawk missile is detrimental to civilization. Some balance of the emotional and rational systems is needed, and that balance may already be optimized by natural selection in human brains. To put it another way, a democracy split across the aisle may be just what you want—a takeover in either direction would almost certainly prove less optimal. The ancient Greeks had an analogy for life that captured this wisdom: you are a charioteer, and your chariot is pulled by two thunderous horses, the white horse of reason and the black horse of passion. The white horse is always trying to tug you off one side of the road, and the black horse tries to pull you off the other side. Your job is to hold on to them tightly, keeping them in check so you can continue down the middle of the road.
The emotional and rational networks battle not only over immediate moral decisions, but in another familiar situation as well: how we behave in time.
Some years ago, the psychologists Daniel Kahneman and Amos Tversky posed a deceptively simple question: If I were to offer you $100 right now or $110 a week from now, which would you choose? Most subjects chose to take $100 right then. It just didn’t seem worthwhile to wait an entire week for another $10.