Gödel, Escher, Bach: An Eternal Golden Braid

Douglas R. Hofstadter

Problems such as these give one pause in considering such statements as this one, made by Warren Weaver, one of the first advocates of translation by computer, in the late 1940's: "When I look at an article in Russian, I say, 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'" Weaver's remark simply cannot be taken literally; it must rather be considered a provocative way of saying that there is an objectively describable meaning hidden in the symbols, or at least something pretty close to objective; therefore, there would be no reason to suppose a computer could not ferret it out, if sufficiently well programmed.

High-Level Comparisons between Programs

Weaver's statement is about translations between different natural languages. Let's consider now the problem of translating between two computer languages. For instance, suppose two people have written programs which run on different computers, and we want to know if the two programs carry out the same task. How can we find out? We must compare the programs. But on what level should this be done? Perhaps one programmer wrote in a machine language, the other in a compiler language. Are two such programs comparable? Certainly. But how to compare them? One way might be to compile the compiler language program, producing a program in the machine language of its home computer.

Now we have two machine language programs. But there is another problem: there are two computers, hence two different machine languages-and they may be extremely different. One machine may have sixteen-bit words; the other thirty-six-bit words. One machine may have built-in stack-handling instructions (pushing and popping), while the other lacks them. The differences between the hardware of the two machines may make the two machine language programs seem incomparable-and yet we suspect they are performing the same task, and we would like to see that at a glance. We are obviously looking at the programs from much too close a distance.

What we need to do is to step back, away from machine language, towards a higher, more chunked view. From this vantage point, we hope we will be able to perceive chunks of program which make each program seem rationally planned out on a global, rather than a local, scale-that is, chunks which fit together in a way that allows one to perceive the goals of the programmer. Let us assume that both programs were originally written in high-level languages. Then some chunking has already been done for us. But we will run into other troubles. There is a proliferation of such languages: Fortran, Algol, LISP, APL, and many others. How can you compare a program written in APL with one written in Algol? Certainly not by matching them up line by line. You will again chunk these programs in your mind, looking for conceptual, functional units which correspond.
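To make this concrete, here is a minimal sketch (in Python, standing in for the Algol and APL named above) of two programs which share not a single line, yet chunk into the same conceptual skeleton; the task and the names are invented purely for illustration.

    # Two programs that carry out the same task, written in deliberately
    # different styles.

    def mean_imperative(xs):
        # Fortran/Algol style: explicit loop and running accumulator.
        total = 0.0
        count = 0
        for x in xs:
            total += x
            count += 1
        return total / count

    def mean_functional(xs):
        # APL/LISP style: whole-list operators, no explicit loop.
        return sum(xs) / len(xs)

    # Line by line the two bodies share nothing; chunked, both read as
    # "add up the elements, then divide by how many there are".
    assert mean_imperative([1, 2, 3, 4]) == mean_functional([1, 2, 3, 4])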

Thus, you are not comparing hardware, you are not comparing software-you are comparing "etherware"-the pure concepts which lie back of the software. There is some sort of abstract "conceptual skeleton" which must be lifted out of low levels before you can carry out a meaningful comparison of two programs in different computer languages, of two animals, or of two sentences in different natural languages.

Now this brings us back to an earlier question which we asked about computers and brains: How can we make sense of a low-level description of a computer or a brain? Is there, in any reasonable sense, an objective way to pull a high-level description out of a low-level one, in such complicated systems? In the case of a computer, a full display of the contents of memory-a so-called memory dump-is easily available. Dumps were commonly printed out in the early days of computing, when something went wrong with a program. Then the programmer would have to go home and pore over the memory dump for hours, trying to understand what each minuscule piece of memory represented.

In essence, the programmer would be doing the opposite of what a compiler does: he would be translating from machine language into a higher-level language, a conceptual language. In the end, the programmer would understand the goals of the program and could describe it in high-level terms-for example, "This program translates novels from Russian to English", or "This program composes an eight-voice fugue based on any theme which is fed in".
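A tiny sketch of this level-shift, in Python (the byte values and their layout are invented for illustration): the same eight bytes are an opaque dump at the low level, and a pair of integers plus a floating-point number once the right chunking is applied.

    import struct

    # A toy "memory dump": eight raw bytes, packed big-endian.
    memory = struct.pack(">hhf", 300, -7, 2.5)

    # Low-level view: the bare bytes, as the programmer first sees them.
    print(memory.hex(" "))                # 01 2c ff f9 40 20 00 00

    # High-level view: the same bytes, chunked as two 16-bit integers
    # followed by a 32-bit floating-point number.
    print(struct.unpack(">hhf", memory))  # (300, -7, 2.5)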

High-Level Comparisons between Brains

Now our question must be investigated in the case of brains. In this case, we are asking, "Are people's brains also capable of being 'read', on a high level? Is there some objective description of the content of a brain?" In the Ant Fugue, the Anteater claimed to be able to tell what Aunt Hillary was thinking about, by looking at the scurryings of her component ants. Could some superbeing-a Neuroneater, perhaps-conceivably look down on our neurons, chunk what it sees, and come up with an analysis of our thoughts?

Certainly the answer must be yes, since we are all quite able to describe, in chunked (i.e., non-neural) terms, the activity of our minds at any given time. This means that we have a mechanism which allows us to chunk our own brain state to some rough degree, and to give a functional description of it. To be more precise, we do not chunk all of the brain state-we only chunk those portions of it which are active. However, if someone asks us about a subject which is coded in a currently inactive area of our brain, we can almost instantly gain access to the appropriate dormant area and come up with a chunked description of it-that is, some belief on that subject. Note that we come back with absolutely zero information on the neural level of that part of the brain: our description is so chunked that we don't even have any idea what part of our brain it is a description of. This can be contrasted with the programmer whose chunked description comes from conscious analysis of every part of the memory dump.

Now if a person can provide a chunked description of any part of his own brain, why shouldn't an outsider too, given some nondestructive means of access to the same brain, be able not only to chunk limited portions of the brain, but actually to give a complete chunked description of it-in other words, a complete documentation of the beliefs of the person whose brain is accessible? It is obvious that such a description would have an astronomical size, but that is not of concern here. We are interested in the question of whether, in principle, there exists a well-defined, high-level description of a brain, or whether, conversely, the neuron-level description-or something equally physiological and intuitively unenlightening-is the best description that in principle exists. Surely, to answer this question would be of the highest importance if we seek to know whether we can ever understand ourselves.

Potential Beliefs, Potential Symbols

It is my contention that a chunked description is possible, but when we get it, all will not suddenly be clear and light. The problem is that in order to pull a chunked description out of the brain state, we need a language to describe our findings. Now the most appropriate way to describe a brain, it would seem, would be to enumerate the kinds of thoughts it could entertain, and the kinds of thoughts it could not entertain-or, perhaps, to enumerate its beliefs and the things which it does not believe. If that is the kind of goal we will be striving for in a chunked description, then it is easy to see what kinds of troubles we will run up against.

Suppose you wanted to enumerate all possible voyages that could be taken in an ASU; there are infinitely many. How do you determine which ones are plausible, though? Well, what does "plausible" mean? We will have precisely this kind of difficulty in trying to establish what a "possible pathway" from symbol to symbol in a brain is. We can imagine an upside-down dog flying through the air with a cigar in its mouth-or a collision between two giant fried eggs on a freeway-or any number of other ridiculous images. The number of far-fetched pathways which can be followed in our brains is without bound, just as is the number of insane itineraries that could be planned on an ASU. But just what constitutes a "sane" itinerary, given an ASU? And just what constitutes a "reasonable" thought, given a brain state? The brain state itself does not forbid any pathway, because for any pathway there are always circumstances which could force the following of that pathway. The physical status of a brain, if read correctly, gives information telling not which pathways could be followed, but rather how much resistance would be offered along the way.
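One way to picture this (a sketch only, in Python, with the symbols and numbers invented for illustration) is as a weighted graph, in which no link between symbols is absent, but each carries a resistance:

    # The brain state as a weighted graph of symbols: no pathway is
    # forbidden, but each link offers a certain resistance.
    resistance = {
        ("dog", "bone"): 0.5,                  # well-worn link: low resistance
        ("dog", "flying upside-down"): 9.0,    # far-fetched link: high resistance
        ("fried egg", "breakfast"): 0.3,
        ("fried egg", "freeway collision"): 9.5,
    }

    def path_resistance(path):
        # Total resistance along a pathway of symbols; links not listed
        # are still possible, merely maximally resistant.
        return sum(resistance.get(link, 10.0) for link in zip(path, path[1:]))

    print(path_resistance(["dog", "bone"]))                # 0.5 -- a "sane" pathway
    print(path_resistance(["dog", "flying upside-down"]))  # 9.0 -- possible, but strained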

Now in an ASU, there are many trips which could be taken along two or more reasonable alternative routes. For example, the trip from San Francisco to New York could go along either a northern route or a southern route. Each of them is quite reasonable, but people tend to take them under different circumstances. Looking at a map at a given moment in time does not tell you anything about which route will be preferable at some remote time in the future-that depends on the external circumstances under which the trip is to be taken. Likewise, the "reading" of a brain state will reveal that several reasonable alternative pathways are often available, connecting a given set of symbols. However, the trip among these symbols need not be imminent; it may be simply one of billions of "potential" trips, all of which figure in the readout of the brain state. From this follows an important conclusion: there is no information in the brain state itself which tells which route will be chosen. The external circumstances will play a large determining role in choosing the route.

What does this imply? It implies that thoughts which clash totally may be produced by a single brain, depending on the circumstances. And any high-level readout of the brain state which is worth its salt must contain all such conflicting versions.

Actually this is quite obvious-that we all are bundles of contradictions, and we manage to hang together by bringing out only one side of ourselves at a given time. The selection cannot be predicted in advance, because the conditions which will force the selection are not known in advance. What the brain state can provide, if properly read, is a conditional description of the selection of routes.

Consider, for instance, the Crab's plight, described in the Prelude. He can react in various ways to the playing of a piece of music. Sometimes he will be nearly immune to it, because he knows it so well. Other times, he will be quite excited by it, but this reaction requires the right kind of triggering from the outside-for instance, the presence of an enthusiastic listener, to whom the work is new. Presumably, a high-level reading of the Crab's brain state would reveal the potential thrill (and conditions which would induce it), as well as the potential numbness (and conditions which would induce it). The brain state itself would not tell which one would occur on the next hearing of the piece, however: it could only say, "If such-&-such conditions obtain, then a thrill will result; otherwise ..."

Thus a chunked description of a brain state would give a catalogue of beliefs which could be evoked conditionally, dependent on circumstances. Since not all possible circumstances can be enumerated, one would have to settle for those which one thinks are "reasonable". Furthermore, one would have to settle for a chunked description of the circumstances themselves, since they obviously cannot-and should not-be specified down to the atomic level! Therefore, one will not be able to make an exact, deterministic prediction saying which beliefs will be pulled out of the brain state by a given chunked circumstance. In summary, then, a chunked description of a brain state will consist of a probabilistic catalogue, in which are listed those beliefs which are most likely to be induced (and those symbols which are most likely to be activated) by various sets of "reasonably likely" circumstances, themselves described on a chunked level. Trying to chunk someone's beliefs without referring to context is precisely as silly as trying to describe the range of a single person's "potential progeny" without referring to the mate.

The same sorts of problems arise in enumerating all the symbols in a given person's brain. There are potentially not only an infinite number of pathways in a brain, but also an infinite number of symbols. As was pointed out, new concepts can always be formed from old ones, and one could argue that the symbols which represent such new concepts are merely dormant symbols in each individual, waiting to be awakened. They may never get awakened in the person's lifetime, but it could be claimed that those symbols are nonetheless always there, just waiting for the right circumstances to trigger their synthesis. However, if the probability is very low, it would seem that "dormant" would be a very unrealistic term to apply in the situation. To make this clear, try to imagine all the "dormant dreams" which are sitting there inside your skull while you're awake. Is it conceivable that there exists a decision procedure which could tell "potentially dreamable themes" from "undreamable themes", given your brain state?

Where Is the Sense of Self?

Looking back on what we have discussed, you might think to yourself, "These speculations about brain and mind are all well and good, but what about the feelings involved in consciousness? These symbols may trigger each other all they want, but unless someone perceives the whole thing, there's no consciousness."
