
I was struck with an idea. I wheeled Mrs. G. to a position just in front of the room’s only mirror and asked if she could see her own face. She said yes. I then asked her to close both her eyes. Again she closed one eye and not the other.

“Are both your eyes closed?”

“Yes.”

“Can you see yourself?”

“Yes.”

Gently I said, “Does it seem possible to see yourself in the mirror if both your eyes are closed?”

Pause.
No conclusion.

“Does it look to you like one eye is closed or that both are closed?”

Pause.
No conclusion.

She was not distressed by the questions; nor did they change her opinion. What would have been a checkmate in a normal brain proved to be a quickly forgotten game in hers.

Cases like Mrs. G.’s allow us to appreciate the amount of work that needs to happen behind the scenes for our zombie systems to work together smoothly and come to an agreement. Keeping the union together and making a good narrative does not happen for free—the brain works around the clock to stitch a pattern of logic onto our daily lives: what just happened, and what was my role in it? Fabricating stories is one of the key businesses in which our brains engage. Brains do this with the single-minded goal of getting the multifaceted actions of the democracy to make sense. As the coin puts it, E pluribus unum: out of many, one.

*   *   *
 

Once you have learned how to ride a bicycle, the brain does not need to cook up a narrative about what your muscles are doing; instead, it doesn’t bother the conscious CEO at all. Because everything is predictable, no story is told; you are free to think of other issues as you pedal along. The brain’s storytelling powers kick into gear only when things are conflicting or difficult to understand, as for the split-brain patients or anosognosics like Justice Douglas.

In the mid-1990s my colleague Read Montague and I ran an experiment to better understand how humans make simple choices. We asked participants to choose between two cards on a computer screen, one labeled A and the other labeled B. The participants had no way of knowing which was the better choice, so they picked arbitrarily at first. Their card choice gave them a reward somewhere between a penny and a dollar. Then the cards were reset and they were asked to choose again. Picking the same card produced a different reward this time. There seemed to be a pattern to it, but it was very difficult to detect. What the participants didn’t know was that the reward in each round was based on a formula that incorporated the history of their previous forty choices—far too difficult for the brain to detect and analyze.
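
To make the setup concrete, here is a minimal Python sketch of a history-dependent payoff of this general kind. The experiment’s actual formula is not given in the text, so the rule below (payoffs driven by how often card A appears among the last forty choices) is purely an illustrative assumption, as are all of the names in the code.

```python
import random
from collections import deque

# Hypothetical sketch of a history-dependent reward schedule.
# The experiment's real formula is not given here; in this toy version the
# payoff depends on how often card A was chosen over the previous forty trials.

HISTORY_LEN = 40

def reward(choice, history):
    """Return a payoff between $0.01 and $1.00 based on recent choices."""
    frac_a = sum(1 for c in history if c == "A") / max(len(history), 1)
    # Card A pays more when it has been picked rarely, and card B the opposite,
    # so the "pattern" keeps shifting with the player's own recent behavior.
    base = (1.0 - frac_a) if choice == "A" else frac_a
    return round(0.01 + 0.99 * base, 2)

history = deque(maxlen=HISTORY_LEN)
total = 0.0
for trial in range(200):
    choice = random.choice(["A", "B"])  # stand-in for a participant's pick
    total += reward(choice, history)
    history.append(choice)

print(f"Total winnings after 200 trials: ${total:.2f}")
```

Even a crude rule like this one yields rewards that feel patterned from trial to trial while depending entirely on the player’s own recent history.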

The interesting part came when I interviewed the players afterward. I asked them what they’d done in the gambling game and why they’d done it. I was surprised to hear all types of baroque explanations, such as “The computer liked it when I switched back and forth” and “The computer was trying to punish me, so I switched my game plan.” In reality, the players’ descriptions of their own strategies did not match what they had actually done, which turned out to be highly predictable.41 Nor did their descriptions match the computer’s behavior, which was purely formulaic. Instead, their conscious minds, unable to assign the task to a well-oiled zombie system, desperately sought a narrative. The participants weren’t lying; they were giving the best explanation they could—just like the split-brain patients or the anosognosics.

Minds seek patterns. In a term introduced by science writer Michael Shermer, they are driven toward “patternicity”—the attempt to find structure in meaningless data.42 Evolution favors pattern seeking, because it allows the possibility of reducing mysteries to fast and efficient programs in the neural circuitry.

To demonstrate patternicity, researchers in Canada showed subjects a light that flashed on and off randomly and asked them to choose which of two buttons to press, and when, in order to make the blinking more regular. The subjects tried out different patterns of button pressing, and eventually the light began to blink regularly. They had succeeded! Now the researchers asked them how they’d done it. The subjects overlaid a narrative interpretation about what they’d done, but the fact is that their button pressing was wholly unrelated to the behavior of the light: the blinking would have drifted toward regularity irrespective of what they were doing.
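
The key point, that the light’s drift toward regularity has nothing to do with the subjects’ behavior, can be illustrated with a small simulation. The sketch below is an assumption-laden caricature rather than the Canadian researchers’ actual protocol: the blink interval’s noise simply decays on a fixed schedule, and the logged button presses are never read.

```python
import random

# A toy simulation of the experiment described above. The real protocol is
# not specified in the text; here the blink interval's jitter shrinks on a
# fixed schedule, and the recorded button presses are never consulted.

def blink_intervals(trials=100, base=1.0, start_jitter=0.5):
    """Seconds between blinks; the noise decays over time regardless of input."""
    intervals = []
    for t in range(trials):
        jitter = start_jitter * (1 - t / trials)  # drifts toward regularity
        intervals.append(base + random.uniform(-jitter, jitter))
    return intervals

def subject_presses(trials=100):
    """Button choices are logged, but nothing downstream ever reads them."""
    return [random.choice(["left", "right"]) for _ in range(trials)]

presses = subject_presses()
intervals = blink_intervals()
print("Spread of first ten intervals:", round(max(intervals[:10]) - min(intervals[:10]), 3))
print("Spread of last ten intervals: ", round(max(intervals[-10:]) - min(intervals[-10:]), 3))
```

Run it and the late intervals cluster far more tightly than the early ones, no matter which buttons were “pressed.”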

For another example of storytelling in the face of confusing data, consider dreams, which appear to be an interpretative overlay to nighttime storms of electrical activity in the brain. A popular model in the neuroscience literature suggests that dream plots are stitched together from essentially random activity: discharges of neural populations in the midbrain. These signals tickle into existence the simulation of a scene in a shopping mall, or a glimpse of recognition of a loved one, or a feeling of falling, or a sense of epiphany. All these moments are dynamically woven into a story, and this is why, after a night of random activity, you wake up, roll over to your partner, and feel as though you have a bizarre plot to relate. Ever since I was a child, I have been consistently amazed at how characters in my dreams possess such specific and peculiar details, how they come up with such rapid answers to my questions, how they produce such surprising dialogue and such inventive suggestions—all manner of things I would not have invented “myself.” Many times I’ve heard a new joke in a dream, and this has impressed me greatly. Not because the joke was so funny in the sober light of day (it wasn’t), but because it was not a joke I could believe I would have thought of. Yet, at least presumably, it was my brain and no one else’s cooking up these interesting plotlines.43 Like the split-brain patients or Justice Douglas, dreams illustrate our skill at spinning a single narrative from a collection of random threads. Your brain is remarkably good at maintaining the glue of the union, even in the face of thoroughly inconsistent data.

WHY DO WE HAVE CONSCIOUSNESS AT ALL?
 

Most neuroscientists study animal models of behavior: how a sea slug withdraws from a touch, how a mouse responds to rewards, how an owl localizes sounds in the dark. As these circuits are scientifically brought to light, they all reveal themselves to be nothing but zombie systems: blueprints of circuitry that respond to particular inputs with appropriate outputs. If our brains were composed only of these patterns of circuits, why would it feel like anything to be alive and conscious? Why wouldn’t it feel like nothing—like a zombie?

A decade ago, neuroscientists Francis Crick and Christof Koch asked, “Why does not our brain consist simply of a series of specialized zombie systems?”44 In other words, why are we conscious of anything at all? Why aren’t we simply a vast collection of these automated, burned-down routines that solve problems?

Crick and Koch’s answer, like mine in the previous chapters, is that consciousness exists to control—and to distribute control over—the automated alien systems. A system of automated subroutines that reaches a certain level of complexity (and human brains certainly qualify) requires a high-level mechanism to allow the parts to communicate, dispense resources, and allocate control. As we saw earlier with the tennis player trying to learn how to serve, consciousness is the CEO of the company: he sets the higher-level directions and assigns new tasks. We have learned in this chapter that he doesn’t need to understand the software that each department in the organization uses; nor does he need to see their detailed logbooks and sales receipts. He merely needs to know whom to call on when.

As long as the zombie subroutines are running smoothly, the CEO can sleep. It is only when something goes wrong (say, all the departments suddenly find that their business models have catastrophically failed) that the CEO is rung up. Think about when your conscious awareness comes online: in those situations where events in the world violate your expectations. When everything is going according to the needs and skills of your zombie systems, you are not consciously aware of most of what’s in front of you; when suddenly they cannot handle the task, you become consciously aware of the problem. The CEO scrambles around, looking for fast solutions, dialing up everyone to find who can address the problem best.

The scientist Jeff Hawkins offers a nice example of this: after he entered his home one day, he realized that he had experienced no conscious awareness of reaching for, grasping, and turning the doorknob. It was a completely robotic, unconscious action on his part—and this was because everything about the experience (the doorknob’s feel and location, the door’s size and weight, and so on) was already burned down into unconscious circuitry in his brain. It was expected, and therefore required no conscious participation. But he realized that if someone were to sneak over to his house, drill the doorknob out, and replace it three inches to the right, he would notice immediately. Instead of his zombie systems getting him directly into his house with no alerts or concerns, suddenly there would be a violation of expectations—and consciousness would come online. The CEO would rouse, turn on the alarms, and try to figure out what might have happened and what should be done next.

If you think you’re consciously aware of most of what surrounds you, think again. The first time you make the drive to your new workplace, you attend to everything along the way. The drive seems to take a long time. By the time you’ve made the drive many times, you can get yourself there without much in the way of conscious deliberation. You are now free to think about other things; you feel as though you’ve left home and arrived at work in the blink of an eye. Your zombie systems are experts at taking care of business as usual. It is only when you see a squirrel in the road, or a missing stop sign, or an overturned vehicle on the shoulder that you become consciously aware of your surroundings.

All of this is consistent with a finding we learned two chapters ago: when people play a new video game for the first time, their brains are alive with activity. They are burning energy like crazy. As they get better at the game, less and less brain activity is involved. They have become more energy efficient. If you measure someone’s brain and see very little activity during a task, it does not necessarily indicate that they’re not trying—it more likely signifies that they have worked hard in the past to burn the programs into the circuitry. Consciousness is called in during the first phase of learning and is excluded from the game playing after it is deep in the system. Playing a simple video game becomes as unconscious a process as driving a car, producing speech, or performing the complex finger movements required for tying a shoelace. These become hidden subroutines, written in an undeciphered programming language of proteins and neurochemicals, and there they lurk—for decades sometimes—until they are next called upon.

From an evolutionary point of view, the purpose of consciousness seems to be this: an animal composed of a giant collection of zombie systems would be energy efficient but cognitively inflexible. It would have economical programs for doing particular, simple tasks, but it wouldn’t have rapid ways of switching between programs or setting goals to become expert in novel and unexpected tasks. In the animal kingdom, most animals do certain things very well (say, prying seeds from the inside of a pine cone), while only a few species (such as humans) have the flexibility to dynamically develop new software.

Although the ability to be flexible sounds better, it does not come for free—the trade-off is a burden of lengthy childrearing. To be flexible like an adult human requires years of helplessness as an infant. Human mothers typically bear only one child at a time and have to provide a period of care that is unheard-of (and impracticable) in the rest of the animal kingdom. In contrast, animals that run only a few very simple subroutines (such as “Eat foodlike things and shrink away from looming objects”) adopt a different rearing strategy, usually something like “Lay lots of eggs and hope for the best.” Without the ability to write new programs, their only available mantra is: If you can’t outthink your opponents, outnumber them.

So are other animals conscious? Science currently has no meaningful way to make a measurement to answer that question—but I offer two intuitions. First, consciousness is probably not an all-or-nothing quality, but comes in degrees. Second, I suggest that an animal’s degree of consciousness will parallel its intellectual flexibility. The more subroutines an animal possesses, the more it will require a CEO to lead the organization. The CEO keeps the subroutines unified; it is the warden of the zombies. To put this another way, a small corporation does not require a CEO who earns three million dollars a year, but a large corporation does. The only difference is the number of workers the CEO has to keep track of, allocate among, and set goals for.

*   *   *

If you put a red egg in the nest of a herring gull, it goes berserk. The color red triggers aggression in the bird, while the shape of the egg triggers brooding behavior—as a result, it tries to simultaneously attack the egg and incubate it.45 It’s running two programs at once, with an unproductive end result. The red egg sets off sovereign and conflicting programs, wired into the gull’s brain like competing fiefdoms. The rivalry is there, but the bird has no capacity to arbitrate in the service of smooth cooperation. Similarly, if a female stickleback trespasses onto a male’s territory, the male will display attack behavior and courtship behavior simultaneously, which is no way to win over a lady. The poor male stickleback appears to be simply a bundled collection of zombie programs triggered by simple lock-and-key inputs (Trespass! Female!), and the subroutines have not found any method of arbitration between them. This seems to me to suggest that the herring gull and the stickleback are not particularly conscious.
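
Read as an architecture, the gull’s predicament amounts to two hard-wired triggers firing on the same stimulus with nothing to arbitrate between them. The sketch below is only a caricature under that reading; the trigger rules and stimulus fields are invented for illustration.

```python
# A caricature of the herring gull's competing "zombie programs": two
# hard-wired triggers fire on the same stimulus, and no arbiter picks a
# winner. The trigger rules and stimulus fields are invented for illustration.

def attack(stimulus):
    return "attack" if stimulus["color"] == "red" else None

def brood(stimulus):
    return "incubate" if stimulus["shape"] == "egg" else None

SUBROUTINES = [attack, brood]

def respond(stimulus):
    # Every triggered program runs; there is no CEO to choose between them.
    return [action for sub in SUBROUTINES if (action := sub(stimulus)) is not None]

print(respond({"color": "red", "shape": "egg"}))       # ['attack', 'incubate']
print(respond({"color": "speckled", "shape": "egg"}))  # ['incubate']
```

A flexible brain, by contrast, would need something above these triggers to notice the conflict and pick one course of action.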
