Gödel, Escher, Bach: An Eternal Golden Braid

Author: Douglas R. Hofstadter

A machine is not a genie, it does not work by magic, it does not possess a will, and, Wiener to the contrary, nothing comes out which has not been put in, barring, of course, an infrequent case of malfunctioning....

The "intentions" which the machine seems to manifest are the intentions of the human programmer, as specified in advance, or they are subsidiary intentions derived from these, following rules specified by the programmer. We can even anticipate higher levels of abstraction, just as Wiener does, in which the program will not only modify the subsidiary intentions but will also modify the rules which are used in their derivation, or in which it will modify the ways in which it modifies the rules, and so on, or even in which one machine will design and construct a second machine with enhanced capabilities. However, and this is important, the machine will not and cannot [italics are his] do any of these things until it has been instructed as to how to proceed. There is and logically there must always remain a complete hiatus between (i) any ultimate extension and elaboration in this process of carrying out man's wishes and (ii) the development within the machine of a will of its own. To believe otherwise is either to believe in magic or to believe that the existence of man's will is an illusion and that man's actions are as mechanical as the machine's. Perhaps Wiener's article and my rebuttal have both been mechanically determined, but this I refuse to believe.

This reminds me of the Lewis Carroll Dialogue (the Two-Part Invention); I'll try to explain why. Samuel bases his argument against machine consciousness (or will) on the notion that any mechanical instantiation of will would require an infinite regress.

Similarly, Carroll's Tortoise argues that no step of reasoning, no matter how simple, can be done without invoking some rule on a higher level to justify the step in question. But that being also a step of reasoning, one must resort to a yet higher-level rule, and so on. Conclusion: Reasoning involves an infinite regress.

Of course something is wrong with the Tortoise's argument, and I believe something analogous is wrong with Samuel's argument. To show how the fallacies are analogous, I now shall "help the Devil", by arguing momentarily as Devil's advocate.

(Since, as is well known, God helps those who help themselves, presumably the Devil helps all those, and only those, who don't help themselves. Does the Devil help himself?) Here are my devilish conclusions drawn from the Carroll Dialogue: The conclusion "reasoning is impossible" does not apply to people, because as is plain to anyone, we do manage to carry out many steps of reasoning, all the higher levels notwithstanding. That shows that we humans operate without need of rules: we are "informal systems". On the other hand, as an argument against the possibility of any mechanical instantiation of reasoning, it is valid, for any mechanical reasoning-system would have to depend on rules explicitly, and so it couldn't get off the ground unless it had metarules telling it when to apply its rules, metametarules telling it when to apply its metarules, and so on. We may conclude that the ability to reason can never be mechanized. It is a uniquely human capability.

What is wrong with this Devil's advocate point of view? It is obviously the assumption that a machine cannot do anything without having a rule telling it to do so. In fact, machines get around the Tortoise's silly objections as easily as people do, and moreover for exactly the same reason: both machines and people are made of hardware which runs all by itself, according to the laws of physics. There is no need to rely on "rules that permit you to apply the rules", because the lowest-level rules-those without any "meta"'s in front-are embedded in the hardware, and they run without permission. Moral: The Carroll Dialogue doesn't say anything about the differences between people and machines, after all. (And indeed, reasoning is mechanizable.)

So much for the Carroll Dialogue. On to Samuel's argument. Samuel's point, if I may caricature it, is this:

No computer ever "wants" to do anything, because it was programmed by someone else. Only if it could program itself from zero on up-an absurdity-would it have its own sense of desire.

In his argument, Samuel reconstructs the Tortoise's position, replacing "to reason" by "to want". He implies that behind any mechanization of desire, there has to be either an infinite regress or worse, a closed loop. If this is why computers have no will of their own, what about people? The same criterion would imply that

Unless a person designed himself and chose his own wants (as well as choosing to choose his own wants, etc.), he cannot be said to have a will of his own.

It makes you pause to think where your sense of having a will comes from. Unless you are a soulist, you'll probably say that it comes from your brain-a piece of hardware which you did not design or choose. And yet that doesn't diminish your sense that you want certain things, and not others. You aren't a "self-programmed object" (whatever that would be), but you still do have a sense of desires, and it springs from the physical substrate of your mentality. Likewise, machines may someday have wills despite the fact that no magic program spontaneously appears in memory from out of nowhere (a "self-programmed program"). They will have wills for much the same reason as you do-by reason of organization and structure on many levels of hardware and software. Moral: The Samuel argument doesn't say anything about the differences between people and machines, after all. (And indeed, will will be mechanized.)

Below Every Tangled Hierarchy Lies An Inviolate Level

Right after the Two-Part Invention, I wrote that a central issue of this book would be:

"Do words and thoughts follow formal rules?" One major thrust of the book has been to point out the many-leveledness of the mind/brain, and I have tried to show why the ultimate answer to the question is, "Yes-provided that you go down to the lowest level-the hardware-to find the rules."

Now Samuel's statement brought up a concept which I want to pursue. It is this: When we humans think, we certainly do change our own mental rules, and we change the rules that change the rules, and on and on-but these are, so to speak, "software rules".

However, the rules at bottom do not change. Neurons run in the same simple way the whole time. You can't "think" your neurons into running some nonneural way, although you can make your mind change style or subject of thought. Like Achilles in the Prelude, Ant Fugue, you have access to your thoughts but not to your neurons. Software rules on various levels can change; hardware rules cannot-in fact, to their rigidity is due the software's flexibility! Not a paradox at all, but a fundamental, simple fact about the mechanisms of intelligence.

This distinction between self-modifiable software and inviolate hardware is what I wish to pursue in this final Chapter, developing it into a set of variations on a theme.

Some of the variations may seem to be quite far-fetched, but I hope that by the time I close the loop by returning to brains, minds, and the sensation of consciousness, you will have found an invariant core in all the variations.

My main aim in this Chapter is to communicate some of the images which help me to visualize how consciousness rises out of the jungle of neurons; to communicate a set of intangible intuitions, in the hope that these intuitions are valuable and may perhaps help others a little to come to clearer formulations of their own images of what makes minds run. I could not hope for more than that my own mind's blurry images of minds and images should catalyze the formation of sharper images of minds and images in other minds.

A Self-Modifying Game

A first variation, then, concerns games in which on your turn, you may modify the rules.

Think of chess. Clearly the rules stay the same; just the board position changes on each move. But let's invent a variation in which, on your turn, you can either make a move or change the rules. But how? At liberty? Can you turn it into checkers? Clearly such anarchy would be pointless. There must be some constraints. For instance, one version might allow you to redefine the knight's move. Instead of being 1-and-then-2, it could be m-and-then-n where m and n are arbitrary natural numbers; and on your turn you could change either m or n by plus or minus 1. So it could go from 1-2 to 1-3 to 0-3 to 0-4 to 0-5 to 1-5 to 2-5... Then there could be rules about redefining the bishop's moves, and the other pieces' moves as well. There could be rules about adding new squares, or deleting old squares...
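The knight-move variation just described is concrete enough to sketch in code. The following is a minimal illustration of my own, not anything from the book: the class name, the 8x8 boundary check, and the natural-number constraint are all assumptions made for the sketch.

```python
class MutableKnightRules:
    """Chess variant sketch: the knight moves m-and-then-n squares, and a
    turn may be spent changing m or n by plus or minus 1 instead of moving."""

    def __init__(self, m=1, n=2):
        self.m, self.n = m, n

    def change_rule(self, which, delta):
        """Spend a turn altering the knight's move definition."""
        assert which in ("m", "n") and delta in (-1, +1)
        value = getattr(self, which) + delta
        if value < 0:
            raise ValueError("m and n must stay natural numbers")
        setattr(self, which, value)

    def knight_targets(self, square):
        """Squares reachable by an (m, n) knight from `square` on an 8x8 board."""
        col, row = square
        deltas = {(s1 * a, s2 * b)
                  for a, b in ((self.m, self.n), (self.n, self.m))
                  for s1 in (-1, 1) for s2 in (-1, 1)}
        return sorted((col + dc, row + dr) for dc, dr in deltas
                      if 0 <= col + dc < 8 and 0 <= row + dr < 8)

rules = MutableKnightRules()   # the standard 1-and-then-2 knight
rules.change_rule("n", +1)     # 1-2 becomes 1-3
rules.change_rule("m", -1)     # 1-3 becomes 0-3
```

The two `change_rule` calls trace the start of the sequence in the text: 1-2 to 1-3 to 0-3.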

Now we have two layers of rules: those which tell how to move pieces, and those which tell how to change the rules. So we have rules and metarules. The next step is obvious: introduce metametarules by which we can change the metarules. It is not so obvious how to do this. The reason it is easy to formulate rules for moving pieces is that pieces move in a formalized space: the checkerboard. If you can devise a simple formal notation for expressing rules and metarules, then to manipulate them will be like manipulating strings formally, or even like manipulating chess pieces. To carry things to their logical extreme, you could even express rules and metarules as positions on auxiliary chess boards. Then an arbitrary chess position could be read as a game, or as a set of rules, or as a set of metarules, etc., depending on which interpretation you place on it. Of course, both players would have to agree on conventions for interpreting the notation.

Now we can have any number of adjacent chess boards: one for the game, one for rules, one for metarules, one for metametarules, and so on, as far as you care to carry it.

On your turn, you may make a move on any one of the chess boards except the top-level one, using the rules which apply (they come from the next chess board up in the hierarchy). Undoubtedly both players would get quite disoriented by the fact that almost anything-though not everything!-can change. By definition, the top-level chess board can't be changed, because you don't have rules telling how to change it. It is inviolate.
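The stacked-boards setup can be reduced to a toy data structure. Everything here is my own simplification: each "board" shrinks to a single integer, and the "rule" is just addition; the only point preserved is that a change on any level is licensed by the level above, and the top level has no licenser.

```python
# Toy model of the board hierarchy: game, rules, metarules, ... Each
# board is reduced to one integer. A move on level i is defined by the
# board on level i+1; the topmost board has no board above it, so there
# is no rule for changing it -- it is inviolate.

def play(stack, level):
    """Change board `level`, as licensed by the board one level up."""
    if level >= len(stack) - 1:
        raise ValueError("the top-level board is inviolate")
    stack[level] += stack[level + 1]   # the rule comes from the next level up
    return stack

stack = [1, 2, 3, 4]   # game, rules, metarules, metametarules
play(stack, 0)         # an ordinary move, governed by the rules board
play(stack, 1)         # change the rules, as the metarules permit
```

Asking to `play` on the top level raises an error: that is the hierarchy's inviolate ceiling.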

There is more that is inviolate: the conventions by which the different boards are interpreted, the agreement to take turns, the agreement that each person may change one chess board each turn-and you will find more if you examine the idea carefully.

Now it is possible to go considerably further in removing the pillars by which orientation is achieved. One step at a time... We begin by collapsing the whole array of boards into a single board. What is meant by this? There will be two ways of interpreting the board: (1) as pieces to be moved; (2) as rules for moving the pieces. On your turn, you move pieces-and perforce, you change rules! Thus, the rules constantly change themselves. Shades of Typogenetics-or for that matter, of real genetics. The distinction between game, rules, metarules, and metametarules has been lost. What was once a nice clean hierarchical setup has become a Strange Loop, or Tangled Hierarchy. The moves change the rules, the rules determine the moves, round and round the mulberry bush... There are still different levels, but the distinction between "lower" and "higher" has been wiped out.
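The collapsed, single-board game can also be sketched, again under heavy simplification of my own: a list of numbers is read both as the position and, via a fixed interpretation convention that stays outside the board, as the rule for the next move. Every move therefore rewrites the very rule it was made under.

```python
# Sketch of the collapsed game: one board, two readings. Read as "rules",
# the board yields a rotation amount; read as "pieces", it is the thing
# that gets rotated. The interpretation convention itself is not on the
# board, so it cannot be changed by any move.

def rule_from_board(board):
    """The fixed interpretation convention: how far the next move rotates."""
    return board[0] % len(board)

def move(board):
    """One turn: apply the rule the board currently encodes to the board."""
    k = rule_from_board(board)
    return board[k:] + board[:k]

board = [3, 1, 4, 1, 5]
board = move(board)   # the move changes the board, and hence the rule
```

Starting from [3, 1, 4, 1, 5] the rule is "rotate by 3"; after one move the board reads "rotate by 1", so the rules really do change themselves, turn by turn.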

Now, part of what was inviolate has been made changeable. But there is still plenty that is inviolate. Just as before, there are conventions between you and your opponent by which you interpret the board as a collection of rules. There is the agreement to take turns-and probably other implicit conventions, as well. Notice, therefore, that the notion of different levels has survived, in an unexpected way. There is an Inviolate level-let's call it the I-level-on which the interpretation conventions reside; there is also a Tangled level-the T-level-on which the Tangled Hierarchy resides. So these two levels are still hierarchical: the I-level governs what happens on the T-level, but the T-level does not and cannot affect the I-level. No matter that the T-level itself is a Tangled Hierarchy-it is still governed by a set of conventions outside of itself. And that is the important point.

As you have no doubt imagined, there is nothing to stop us from doing the "impossible"-namely, tangling the I-level and the T-level by making the interpretation conventions themselves subject to revision, according to the position on the chess board.

But in order to carry out such a "supertangling", you'd have to agree with your opponent on some further conventions connecting the two levels-and the act of doing so would create a new level, a new sort of inviolate level on top of the "supertangled" level (or underneath it, if you prefer). And this could go on and on. In fact, the "jumps" which are being made are very similar to those charted in the Birthday Cantatatata, and in the repeated Gödelization applied to various improvements on TNT.

Each time you think you have reached the end, there is some new variation on the theme of jumping out of the system which requires a kind of creativity to spot.

The Authorship Triangle Again
