
Here there are two basic strategies you might follow. On one, you should economize whenever you can: give your robot no more discriminatory prowess than it will probably need in order to distinguish whatever needs distinguishing in the world—given its particular constitution.

9. An earlier version of this thought experiment appeared in Dennett 1987b, ch. 8.


Your task would be made much more difficult if you couldn't count on your robot's being the only such robot around with such a mission. Let us suppose that, in addition to whatever people and other animals are up and about during the centuries to come, there will be other robots, many different robots (and perhaps "plants" as well), competing with your robot for energy and safety. (Why might such a fad catch on? Let's suppose we get irrefutable advance evidence that travelers from another galaxy will arrive on our planet in 2401. I for one would ache to be around to meet them, and if cold storage was my only prospect, I'd be tempted to go for it.) If you have to plan for dealing with other robotic agents, acting on behalf of other clients like yourself, you would be wise to design your robot with enough sophistication in its control system to permit it to calculate the likely benefits and risks of cooperating with other robots, or of forming alliances for mutual benefit. You would be most unwise to suppose that other clients will be enamored of the rule of "live and let live"—there may well be inexpensive "parasite" robots out there, for instance, just waiting to pounce on your expensive contraption and exploit it. Any calculations your robot makes about these threats and opportunities would have to be "quick and dirty"; there is no foolproof way of telling friends from foes, or traitors from promise-keepers, so you will have to design your robot to be, like a chess-player, a decision-maker who takes risks in order to respond to time pressure.
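
A toy sketch may make the design constraint concrete. The following Python fragment is not from Dennett; it is a purely hypothetical illustration of a "quick and dirty" cooperation decision made under a time budget, in which the robot scans only as much of its interaction history as the deadline allows and then gambles on an expected-value estimate. Every name and number in it is invented for the illustration.

import time

def decide_cooperation(history, benefit=10.0, exploitation_cost=25.0,
                       time_budget_s=0.001):
    """Quick-and-dirty expected-value test, cut short by a deadline.

    `history` records past interactions with the other agent:
    1 = looked exploitative, 0 = looked cooperative. The robot scans only
    as much history as the time budget allows, so its estimate of the other
    agent's intentions is deliberately rough and the decision is risky.
    """
    deadline = time.monotonic() + time_budget_s
    seen, exploitative = 0, 0
    for interaction in history:
        if time.monotonic() > deadline:
            break                      # out of time: decide with what we have
        seen += 1
        exploitative += interaction
    p_parasite = exploitative / seen if seen else 0.5   # no evidence: even odds
    expected_value = (1 - p_parasite) * benefit - p_parasite * exploitation_cost
    return expected_value > 0          # cooperate only if the gamble looks worth it

if __name__ == "__main__":
    print("cooperate" if decide_cooperation([0, 0, 1, 0, 0]) else "refuse")

The only point of the sketch is the trade-off Dennett describes: bounded time forces the robot to act on fallible estimates rather than on anything like proof.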

The result of this design project would be a robot capable of exhibiting self-control of a high order. Since you must cede fine-grained real-time control to it once you put yourself to sleep, you will be as "remote" as the engineers in Houston were when they gave the Viking spacecraft its autonomy (see chapter 12). As an autonomous agent, it will be capable of deriving its own subsidiary goals from its assessment of its current state and the import of that state for its ultimate goal (which is to preserve you till 2401). These secondary goals, which will respond to circumstances you cannot predict in detail (if you could, you could hard-wire the best responses to them), may take the robot far afield on century-long projects, some of which may well be ill-advised, in spite of your best efforts. Your robot may embark on actions antithetical to your purposes, even suicidal, having been convinced by another robot, perhaps, to subordinate its own life mission to some other.

This robot we have imagined will be richly engaged in its world and its projects, always driven ultimately by whatever remains of the goal states that you set up for it at the time you entered the capsule. All the preferences it will ever have will be the offspring of the preferences you initially endowed it with, in hopes that they would carry you into the twenty-fifth century, but that is no guarantee that actions taken in the light of the robot's descendant preferences will continue to be responsive, directly, to your best interests. From your selfish point of view, that is what you hope, but this robot's projects are out of your direct control until you are awakened. It will have some internal representation of its currently highest goals, its summum bonum, but if it has fallen among persuasive companions of the sort we have imagined, the iron grip of the engineering that initially designed it will be jeopardized. It will still be an artifact, still acting only as its engineering permits it to act, but following a set of desiderata partly of its own devising.

Still, according to the assumption we decided to explore, this robot will not exhibit anything but derived intentionality, since it is just an artifact, created to serve your interests. We might call this position "client centrism" with regard to the robot: I am the original source of all the derived meaning within my robot, however far afield it drifts. It is just a survival machine designed to carry me safely into the future. The fact that it is now engaged strenuously in projects that are only remotely connected with my interests, and even antithetical to them, does not, according to our assumption, endow any of its control states, or its "sensory" or "perceptual" states, with genuine intentionality. If you still want to insist on this client centrism, then you should be ready to draw the further conclusion that you yourself never enjoy any states with original intentionality, since you are just a survival machine designed, originally, for the purpose of preserving your genes until they can replicate. Our intentionality is derived, after all, from the intentionality of our selfish genes. They are the Unmeant Meaners, not us!

If this position does not appeal to you, consider jumping the other way. Acknowledge that a fancy-enough artifact—something along the lines of these imagined robots—can exhibit real intentionality, given its rich functional embedding in the environment and its prowess at self-protection and self-control.10 It, like you, owes its very existence to a project the goal of which was to create a survival machine, but it, like you, has taken on a certain autonomy, has become a locus of self-control and self-determination, not by any miracle, but just by confronting problems during its own "lifetime" and more or less solving them—problems of survival presented to it by the world. Simpler survival machines—plants, for instance—never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained.

10. In the light of this thought experiment, consider an issue raised by Fred Dretske (personal communication) with admirable crispness: "I think we could (logically) create an artifact that acquired original intentionality, but not one that (at the moment of creation, as it were) had it." How much commerce with the world would be enough to turn the dross of derived intentionality into the gold of original intentionality? This is our old problem of essentialism, in a new guise. It echoes the desire to zoom in on a crucial moment and thereby somehow identify a threshold that marks the first member of a species, or the birth of real function, or the origin of life, and as such it manifests a failure to accept the fundamental Darwinian idea that all such excellences emerge gradually by finite increments. Notice, too, that Dretske's doctrine is a peculiar brand of extreme Spencerism: the current environment must do the shaping of the organism before the shape "counts" as having real intentionality; past environments, filtered through the wisdom of engineers or a history of natural selection, don't count—even if they result in the very same functional structures. There is something wrong and something right in this. More important than any particular past history of individual appropriate commerce with the real world is the disposition to engage in supple future interactions, appropriately responsive to whatever novelty the world imposes. But—and this is the solid ground, I think, for Dretske's intuition—since this capacity for swift redesign is apt to show itself in current or recent patterns of interaction, his insistence that an artifact exhibit "do-it-yourself understanding" (Dennett 1992) is plausible, so long as we jettison the essentialism and treat it simply as an important symptom of intentionality worthy of the name.

If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility—nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing. That's cutting it mighty fine. I recommend abandoning such a forlorn disclaimer and acknowledging that the meaning such a robot would discover in its world, and exploit in its own communications with others, would be exactly as real as the meaning you enjoy. Then your selfish genes can be seen to be the original source of your intentionality—and hence of every meaning you can ever contemplate or conjure up—even though you can then transcend your genes, using your experience, and in particular the culture you imbibe, to build an almost entirely independent (or "transcendent") locus of meaning on the base your genes have provided.

I find this an entirely congenial—indeed, inspiring—resolution of the tension between the fact that I, as a person, consider myself to be a source of meaning, an arbiter of what matters and why, and the fact that at the same time I am a member of the species Homo sapiens, a product of several billion years of nonmiraculous R and D, enjoying no feature that didn't spring from the same set of processes one way or another. I know that others find this vision so shocking that they turn with renewed eagerness to the conviction that somewhere, somehow, there just has to be a blockade against Darwinism and AI. I have tried to show that Darwin's dangerous idea carries the implication that there is no such blockade. It follows from the truth of Darwinism that you and I are Mother Nature's artifacts, but our intentionality is none the less real for being an effect of millions of years of mindless, algorithmic R and D instead of a gift from on high. Jerry Fodor may joke about the preposterous idea of our being Mother Nature's artifacts, but the laughter rings hollow; the only alternative views posit one skyhook or another. The shock of this conclusion may be enough to make you more sympathetic to Chomsky's or Searle's forlorn attempts to conceal the mind behind impenetrable mystery, or Gould's forlorn attempts to escape from the implication that natural selection is all it takes—an algorithmic series of cranes cranking out ever higher forms of design.

Or it may inspire you to look elsewhere for a savior. Didn't the mathematician Kurt Gödel prove a great theorem that demonstrated the impossibility of AI? Many have thought so, and recently their hunch was given a powerful boost by one of the world's most eminent physicists and mathematicians, Roger Penrose, in his book The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (1989), to which the next chapter is devoted.

CHAPTER 14: Real meaning, the sort of meaning our words and ideas have, is itself an emergent product of originally meaningless processes: the algorithmic processes that have created the entire biosphere, ourselves included. A robot designed as a survival machine for you would, like you, owe its existence to a project of R and D with other ulterior ends, but this would not prevent it from being an autonomous creator of meanings, in the fullest sense.

CHAPTER 15: One more influential source of skepticism about AI (and Darwin's dangerous idea) must be considered and neutralized: the persistently popular idea to the effect that Gödel's Theorem proves that AI is impossible. Roger Penrose has recently revived this meme, which thrives in darkness, and his exposition of it is so clear that it amounts to exposure. We can exapt his artifact to our own purposes: with his unintended help, this meme can be extinguished.


CHAPTER FIFTEEN

The Emperor's New Mind, and Other Fables

1. THE SWORD IN THE STONE

In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

doubts its soundness. The controversy all lies in how to harness the theorem to prove anything about the nature of the mind. The weakness in any such argument must come at the crucial empirical step: the step where we look to see our heroes (ourselves, our mathematicians) doing the thing that the robot simply cannot do. Is the feat in question like pulling the sword from the stone, a feat that has no plausible lookalikes, or is it a feat that cannot readily (if at all) be distinguished from mere approximations of the feat? That is the crucial question, and there has been a lot of confusion about just what the distinguishing feat is. Some of the confusion can be blamed on Kurt Gödel himself, for he thought that he had proved that the human mind must be a skyhook.

In 1931 Gödel, a young mathematician at the University of Vienna, published his proof, one of the most important and surprising mathematical results of the twentieth century, establishing an absolute limit on mathematical proof that is really quite shocking. Recall the Euclidean geometry you studied in high school, in which you learned to create formal proofs of theorems of geometry, from a basic list of axioms and definitions, using a fixed list of inference rules. You were learning your way around in an axiomatization of plane geometry. Remember how the teacher would draw a geometric diagram on the blackboard, showing a triangle, say, with various straight lines intersecting its sides in various ways, meeting at various angles, and then ask you such questions as: "Do these two lines have to intersect at a
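
Dennett keeps the mathematics informal here, so a standard modern statement of the result he is describing may be useful; the formulation below (the Rosser-strengthened form usually given in logic textbooks) is conventional notation, not Dennett's own wording, and it is offered only as a gloss on "an absolute limit on mathematical proof."

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{theorem}{Theorem}
\begin{document}
\begin{theorem}[First Incompleteness Theorem, modern form]
Let $F$ be a consistent, recursively axiomatizable formal system strong enough
to express elementary arithmetic. Then there is a sentence $G_F$ in the
language of $F$ such that
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F ,
\]
so no such fixed list of axioms and inference rules can prove or refute every
arithmetical sentence.
\end{theorem}
\end{document}

As the preceding pages stress, the controversy Dennett cares about is not the theorem's soundness but how anyone proposes to harness it to conclusions about the mind.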
