The Happiness of Pursuit: What Neuroscience Can Teach Us About the Good Life
In being dynamical, a brain does not differ from any other physical-law-abiding system, such as the pebble-based missile control computer that I described earlier in this chapter. (I consider this good news: the more humdrum the emerging explanation of the mind is, the easier it is to relate to.) Even so, the dynamics of a brain, especially a large one such as yours or mine, is much, much more involved—because neurons, unlike pebbles or anvils, are a piece of work.
First, neural representations are active. A pebble resting on the ground after a fall can represent a fallen anvil (boring); if you need it to represent a falling anvil, you’ll have to pick it up first. In comparison, neurons pick themselves up every time. A neuron that just finished representing a bit of outside information, which it does by sending a voltage spike down its output fiber or axon, becomes ready to do so all over again under its own power after just a few thousandths of a second. All it asks for in return is some glucose and some oxygen.
Second, neurons network. Axons make connections to other neurons at junctions that are called synapses. Each of these acts like a throttle that controls the strength of the tiny kick of electrical current imparted to the target neuron by an incoming spike. Each of the brain’s billions of neurons connects to others, whose numbers may run into the tens of thousands. Large-scale networking is no longer the exclusive province of nervous systems that it was before the Internet became a household word, and yet there is no artificial network out there that packs so much knottiness into such a small volume as the human brain.
Third, neural activity is patterned. The cumulative effect of many incoming kicks may push the target neuron over the brink, causing it to fire. Because their individual contributions are typically very small, spikes from many source neurons must converge on the target neuron within a short time of each other for it to fire. Moreover, because each spike’s kick is regulated by the efficacy or “weight” of the synapse through which it is delivered, only certain specific coalitions of other neurons can set off a given neuron. Thus, neural activity is highly structured both in time and in space.
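The logic of coincidence detection described above can be put in a few lines of code. This is a minimal sketch, not a biophysical model: the threshold, weights, and timing window are invented for illustration, and a real neuron integrates and leaks continuously rather than summing over a fixed window.

```python
# A sketch of patterned firing: a neuron fires only when the weighted
# kicks from incoming spikes, arriving close together in time, sum past
# a threshold. All numbers here are illustrative assumptions.

THRESHOLD = 1.0

def fires(spike_times, weights, window=0.005):
    """Return True if any group of spikes arriving within `window`
    seconds of each other delivers enough total weight to cross
    the threshold."""
    events = sorted(zip(spike_times, weights))
    for i, (t0, _) in enumerate(events):
        total = sum(w for t, w in events[i:] if t - t0 <= window)
        if total >= THRESHOLD:
            return True
    return False

# Ten weak inputs arriving nearly together can do what the same ten,
# spread out in time, cannot:
together = fires([0.001 * k for k in range(10)], [0.2] * 10)  # True
spread   = fires([0.050 * k for k in range(10)], [0.2] * 10)  # False
```

Note that only the coalition that arrives together sets the neuron off: the total incoming weight is identical in both calls, but only in the first does enough of it land within the window.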
Fourth, neurons learn from experience. Much of this learning takes the form of activity-dependent modification of the hundreds of billions of synapses connecting neurons to each other. In this type of learning, a synapse is made a bit stronger every time it delivers a kick just before its target neuron fires and a bit weaker every time the kick arrives slightly too late to make a difference. Mathematical analysis and empirical investigations show that this simple rule for synaptic modification can cause ensembles of neurons to self-organize into performing certain kinds of statistical inference on their inputs, thereby learning representations that support Bayesian decision making.
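The timing-based rule just described can be sketched directly. This is an illustrative toy, assuming a hypothetical learning rate and timing window rather than measured values: a synapse strengthens when its kick arrives just before the target fires, and weakens when it arrives slightly too late.

```python
# A sketch of the activity-dependent learning rule described above.
# LEARNING_RATE and WINDOW are illustrative assumptions, not
# physiological constants.

LEARNING_RATE = 0.05
WINDOW = 0.020  # seconds within which timing "makes a difference"

def update_weight(weight, spike_time, target_fire_time):
    """Strengthen the synapse if its spike preceded the target's
    firing, weaken it if the spike arrived just after."""
    dt = target_fire_time - spike_time
    if 0 < dt <= WINDOW:        # kick arrived just before firing
        weight += LEARNING_RATE
    elif -WINDOW <= dt < 0:     # kick arrived slightly too late
        weight -= LEARNING_RATE
    return max(0.0, weight)     # weights stay non-negative

w = 0.5
w = update_weight(w, spike_time=0.100, target_fire_time=0.110)  # earlier: stronger
w = update_weight(w, spike_time=0.200, target_fire_time=0.195)  # later: weaker
```

Applied across a whole network, a rule of this shape rewards synapses whose kicks help cause firing and punishes those whose kicks come too late to matter, which is what lets the ensemble self-organize.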
To see how statistical computation can be carried out by brains, think of a neuron in the brain of a mole rat that comes to represent the presence of a ditch within her tactile sensory range. If this neuron’s axon connects to another neuron, which has learned to represent the sensory quality of the echo that the mole-rat experienced shortly beforehand, then the weight of the synapse between them can be seen to represent the conditional probability of the echo, given the presence of the ditch—one of the quantities that the Bayesian brain needs to exercise foresight.
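The mole-rat example can be made concrete with a small sketch. The idea, hedged as an illustration rather than a claim about actual synaptic mechanisms, is that a weight tracking how often the echo accompanies the ditch converges on the conditional probability the Bayesian brain needs; the episode counts below are invented.

```python
# An illustrative account of a synaptic weight as a conditional
# probability: the fraction of "ditch" episodes in which the echo was
# also heard estimates P(echo | ditch). The episodes are made up.

def conditional_probability(episodes):
    """episodes: list of (ditch_present, echo_heard) boolean pairs."""
    with_ditch = [echo for ditch, echo in episodes if ditch]
    return sum(with_ditch) / len(with_ditch)

episodes = ([(True, True)] * 8      # ditch present, echo heard
            + [(True, False)] * 2   # ditch present, no echo
            + [(False, False)] * 5) # no ditch at all

weight = conditional_probability(episodes)  # 8 of 10 ditch episodes -> 0.8
```

Episodes without the ditch are simply left out of the estimate, which is what makes the quantity conditional: the weight answers "given the ditch, how likely was the echo?" rather than "how often did both occur?"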
It turns out that a network of neurons is a natural biological medium for representing a network of causal relationships. Within this medium, the activities of neurons stand for objects or events, and their connections represent the patterns of causation: conditional probabilities, context, exceptions, and so on. Being inherently numerical, the currency that neurons trade with one another—the numbers of spikes they emit and their timing—is the most versatile kind of physical symbol. By being able to learn and use numerical representations, neurons leave pebbles and anvils (considered as physical symbols) in the dust. Of course, any symbol, numerical or not, can stand for anything at all. However, numerical symbols are absolutely required if the representational system needs to deal with quantities that must be mathematically relatable to each other, as in the probabilistic computation of causal knowledge.
Thus, not only are networks of neurons exquisitely suitable for representing the world and making statistically grounded foresight possible, but they can learn to do so on their own, as synapses change in response to the relative strength and timing of the activities of the neurons they connect. Seeing neural computation in this light goes a long way toward demystifying the role of the brain in making its owner be mindful of the world at large. The collective doings of the brain’s multitudes of neurons may be mind-boggling to contemplate, but that’s only because explanatory value—that is, conceptual simplicity—is found in the principles, not the details, of what the brain does.
Minds Without Brains
 
One of my favorite concise descriptions of the nature of the human mind comes from mathematician and computer scientist Marvin Minsky, who once observed that the mind is what the brain does. Having gotten a glimpse of the principles of what the mind is (a bundle of computations in the service of forethought) and of what the brain does (carrying out those computations), we can appreciate Minsky’s quip, but also discern that it is open to a very intriguing interpretation. The point of it is this: if what the brain does can be done by other means, then a mind can arise without the need for a brain.
To make peace with this outrageous yet true proposition, we need to focus on a key characteristic of computation: identical computations can be carried out by radically different physical means. This characteristic figured already in the very first example that I used to introduce the concept of computation in this book—that of the pebble and the anvil. Each of these two objects computes, by analog means, virtually the same thing (a particular kind of trajectory that must be followed while falling down to earth), which is why a pebble can represent the anvil (and vice versa).
Intuition suggests that this very same analog computation can be carried out by throwing any of a number of other objects—such as penguins (as I noted earlier in this chapter) or, to pick a less trite example, marmots. It appears that we may change many of the object’s properties without altering at all the computation that it carries out by undergoing the process of falling. For instance, we may vary freely the ferocity of the thrown animal: falling ferrets and rabbits would do equally well in computing the trajectory of a falling anvil. All this is so because pebbles, anvils, penguins, marmots, ferrets, and rabbits share the one physical feature that is absolutely required for the analog computation in question, namely, a high ratio of mass to air resistance.
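The claim that the mass-to-drag ratio is all that matters can be checked numerically. This is a sketch under simplifying assumptions (quadratic air resistance, simple Euler integration, invented masses and drag coefficients): two falling objects with the same ratio of mass to drag trace the same trajectory, so either one "computes" the other's fall.

```python
# A sketch of analog computation by falling objects: the dynamics
# depends only on the ratio drag/mass, so an anvil and a (hypothetical)
# marmot with the same ratio compute identical trajectories. Masses and
# drag coefficients are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def fall(mass, drag, duration=2.0, dt=0.001):
    """Return the downward velocity after `duration` seconds of free
    fall with quadratic air resistance, by Euler integration."""
    v = 0.0
    t = 0.0
    while t < duration:
        a = G - (drag / mass) * v * v  # drag grows with speed squared
        v += a * dt
        t += dt
    return v

anvil  = fall(mass=20.0, drag=0.4)  # mass/drag ratio = 50
marmot = fall(mass=5.0,  drag=0.1)  # same ratio: same fall
```

Doubling the marmot's drag without doubling its mass would break the equivalence, which is the sense in which the mass-to-drag ratio, and nothing else about the object, is the physically required feature.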
What special physical features are required for computing a mind? None at all! That networks of neurons and learning synapses are exceedingly good at their job does not preclude mind-like computation from being enacted by some other kind of contraption. Remember that the brain’s neurons compute the mind by multiplying, adding, and passing around numbers that stand for various aspects of the world and of the brain’s own internal states, and numbers don’t care what they are made of. For a sorting machine whose function is to count roughly fist-sized fruit, nine apples means exactly the same thing as nine oranges. More generally, thoroughly different machines can be made to compute exactly the same thing: an 1884 vintage mechanical cash register is just as good at doing sums as an application that emulates it, complete with the clanking and the concluding bell chime, on an early-twenty-first-century handheld computing device.
This implies that any machine that can carry out a particular brain’s number game—as it unfolds over time and down to the last bit—would give rise to precisely the same mind that the brain does.11
This equality has one consequence that only the most bigoted computing machine would fail to appreciate: in the society of minds, it does not matter what you compute yourself with or whether your grandmother had gears, vacuum tubes, or actual blood-soaked gray goo in her brainpan. On top of that, the mind is a moveable feast: although the original home of the human mind is in the human brain, it can flourish in any other medium that does faithfully and well enough what the brain does so well.
 
SYNOPSIS
 
The familiar “computer metaphor” that halfheartedly likens the brain to a computer must be discarded: it is unnecessary and in fact inappropriate, because the mind is computational in a literal sense. It is easy to explain the concept of computation in plain terms: it turns out that every physical process computes something. Which physical processes are cognitive processes? Those that operate on representations—internal stand-ins for objects and events that are external to the system in question.
To establish the relevance of computation to cognition, we need to consider examples of perceptual, motor, and other tasks that can only be solved by crunching numbers, some of which stand for various entities external to the brain and others for its internal states. It turns out that all of the mind’s tasks are like that. More generally, minds evolved to support foresight, which brains compute by learning and using the statistics of the world in which they live.
Thus, the mind is best defined as the bundle of computations carried out collectively by the brain’s neurons. Because the same computation can be implemented by different physical means, nonbiological minds are revealed as a distinct possibility. This line of reasoning, supported by hard evidence from the cognitive sciences, exposes the mind-body problem as an artifact of old ways of conceptualizing cognition that can be safely dismissed.
 
 
3
 
The Republic of Soul
 
A discourse on method. Faster than a speeding marmot. A treatise of human nature. Perception by numbers. Representation space: the final frontier. Being in the world. The instruments of change. The value of everything. Things get interesting.
 
Between the motion
And the act
Falls the Shadow.
—T. S. ELIOT,
The Hollow Men
(1925)
 
 
. . . at this moment, I’d say, I am
a bringer of light; a man who stands in a doorway
flooded by sun;
I am a bird; someone who learns,
in shadow, the real shape of brightness.
—WILLIAM REICHARD, “An Open Door” (This Brightness, 2007)
 
A Discourse on Method
 
If this book is your first encounter with the idea that the mind is a bundle of computations, reading the previous chapter may have been something of a transformative experience for you. Francis Crick, best known for discovering together with James Watson the double-helix structure of the DNA molecule, gave the title The Astonishing Hypothesis to a book in which he equated the mind with “the behavior of a vast assembly of nerve cells.” How much more astonishing is the hypothesis that what truly matters about the behavior of nerve cells is what they compute!
Whether you have been transformed, astonished, or merely intrigued by this hypothesis (which complements the other one nicely),1 let me tell you that the best stuff—seeing it grow, prosper, and bear yummily explanatory fruit from one page to the next—is yet to come. So is, I must disclose, some hard thinking, considering what we’re up against here. If the mind is a bundle of computations, then understanding how it works would seem to call for a feat of reverse software engineering—a field of endeavor whose defining observation is “Hell is someone else’s code.”
Fortunately, just as you did not need a degree in computer science to understand the fundamental nature of computation, you do not need one to work out quite an adequate explanation of how computations come together to form a mind. This is because the machinery of mind does not, in fact, use any kind of “software” for anyone to decipher—just as a falling pebble does not use software to plot its course. To start making sense of this machinery, we need to learn to think about the mind on a number of levels.
It is the need to do so that distinguishes mindful entities—those whose representations are used by the entities themselves to some purpose—from mindless ones. It does not take a lot of sophistication for a system to possess rudimentary purposive mindfulness. What are its telltale signs? Imagine yourself waking up in the morning, perhaps a little earlier than usual, to discover that your slippers are slowly but steadily trying to get out of your sight. Such a scene would definitely raise more than one question. Some of the more likely ones are “Why?” (or, if you are prone to taking things personally, “Why are they doing this to me?”) and “How?” A little reflection reveals that this last question is too general, which suggests multiple alternatives that are more specific, such as “How do they figure out where to go?” and “How do they actually move?” Now, if you have more than a passing curiosity about your world, you would not settle for having just one of these questions answered. They all address different aspects of the startling mindfulness exhibited by your slippers, and their respective answers tend to complement each other, resulting in a more complete understanding.
