Gödel, Escher, Bach: An Eternal Golden Braid

Douglas R. Hofstadter

Objection (9) I find remarkable. I have seen the Turing paper reprinted in a book-but with objection (9) omitted-which I find equally remarkable. My own point of view-contrary to Turing's-is that ESP does not exist. Turing was reluctant to accept the idea that ESP is real, but did so nonetheless, being compelled by his outstanding scientific integrity to accept the consequences of what he viewed as powerful statistical evidence in favor of ESP. I disagree, though I consider it an exceedingly complex and fascinating question.

"Parry Encounters the Doctor"

In the Dialogue preceding this Chapter, you have seen an authentic exchange between a computer program and a human. Now there have been many programs with less ability to "converse" than SHRDLU. Two rather famous ones are "Doctor", created by Joseph Weizenbaum, and "Parry", created by Kenneth Colby. The former is supposed to simulate a psychiatrist using "nondirective therapy", the latter to simulate the belief structure of a paranoid. An amusing head-on collision between the two programs was arranged over a computer network, and the following is a short excerpt from the full transcript:9

Parry: I don't understand your motives.

Doctor: Why don't you understand my motives?

Parry: Shouldn't I be?

Doctor: Why do you say that just now?

Parry: That's just the way it is.

Doctor: Suppose that's just the way it were not.

Parry: So?

Doctor: I am not sure I understand you fully.

Parry: You harp on the same questions too much.

Doctor: What does that similarity suggest to you?

Parry: I don't get you.

Doctor: Do you wish that you did get I?

Parry: It's about time you showed some interest in my feelings.

Doctor: Oh? I showed some interest in your feelings?

These programs, of course, were made to talk to humans, so they are shown up rather poorly here, pitted against each other. Mostly they rely on shrewd guesses as to the nature of the input (which they analyze quite shallowly) and spit back canned answers that have been carefully selected from a large repertoire. The answer may be only partially canned: for example, a template with blanks that can be filled in. It is assumed that their human partners will read much more into what they say than is actually underlying it.
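To make the "template with blanks" idea concrete, here is a minimal Python sketch of how such a canned-response program might be organized. The patterns, response templates, and fallback lines are invented for illustration; they are not the actual rules used by ELIZA, Doctor, or Parry.

```python
import random
import re

# Invented pattern/template pairs: each template has a blank ({0}) to be
# filled in with whatever the pattern captured from the user's input.
RULES = [
    (re.compile(r"\bI don't understand (.+)", re.IGNORECASE),
     ["Why don't you understand {0}?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["How long have you felt {0}?", "Why do you feel {0}?"]),
    (re.compile(r"\byou (.+)", re.IGNORECASE),
     ["Oh? I {0}?"]),
]

# Fully canned fallbacks, used when no template matches.
FALLBACKS = ["I am not sure I understand you fully.",
             "Why do you say that just now?"]

def reply(line: str) -> str:
    """Shallowly match the input against the templates and fill in the blanks."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I don't understand your motives."))
    print(reply("You harp on the same questions too much."))
```

Nothing in such a program analyzes what the sentence is about; it only pattern-matches the surface of the input, which is why the illusion collapses when two such programs are pitted against each other.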

And in fact, according to Weizenbaum, in his book Computer Power and Human Reason, just that happens. He writes:

ELIZA [the program from which Doctor was made] created the most remarkable illusion of having understood in the minds of the many people who conversed with it.... They would often demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.10

Given the above excerpt, you may find this incredible. Incredible, but true.

Weizenbaum has an explanation:

Most men don't understand computers to even the slightest degree. So, unless they are capable of very great skepticism (the kind we bring to bear while watching a stage magician), they can explain the computer's intellectual feats only by bringing to bear the single analogy available to them, that is, their model of their own capacity to think. No wonder, then, that they overshoot the mark: it is truly impossible to imagine a human who could imitate ELIZA, for example, but for whom ELIZA's language abilities were his limit.

Which amounts to an admission that this kind of program is based on a shrewd mixture of bravado and bluffing, taking advantage of people's gullibility.

In light of this weird "ELIZA-effect", some people have suggested that the Turing test needs revision, since people can apparently be fooled by simplistic gimmickry. It has been suggested that the interrogator should be a Nobel Prize-winning scientist. It might be more advisable to turn the Turing test on its head, and insist that the interrogator should be another computer. Or perhaps there should be two interrogators-a human and a computer-and one witness, and the two interrogators should try to figure out whether the witness is a human or a computer.

In a more serious vein, I personally feel that the Turing test, as originally proposed, is quite reasonable. As for the people who Weizenbaum claims were sucked in by ELIZA, they were not urged to be skeptical, or to use all their wits in trying to determine if the "person" typing to them were human or not. I think that Turing's insight into this issue was sound, and that the Turing test, essentially unmodified, will survive.

A Brief History of AI

I would like in the next few pages to present the story, perhaps from an unorthodox point of view, of some of the efforts at unraveling the algorithms behind intelligence: there have been failures and setbacks and there will continue to be. Nonetheless, we are learning a great deal, and it is an exciting period.

Ever since Pascal and Leibniz, people have dreamt of machines that could perform intellectual tasks. In the nineteenth century, Boole and De Morgan devised "laws of thought"-essentially the Propositional Calculus-and thus took the first step towards AI software; also Charles Babbage designed the first "calculating engine"-the precursor to the hardware of computers and hence of AI.

One could define AI as coming into existence at the moment when mechanical devices took over any tasks previously performable only by human minds. It is hard to look back and imagine the feelings of those who first saw toothed wheels performing additions and multiplications of large numbers. Perhaps they experienced a sense of awe at seeing "thoughts" flow in their very physical hardware. In any case, we do know that nearly a century later, when the first electronic computers were constructed, their inventors did experience an awesome and mystical sense of being in the presence of another kind of "thinking being". To what extent real thought was taking place was a source of much puzzlement; and even now, several decades later, the question remains a great source of stimulation and vitriolics.

It is interesting that nowadays, practically no one feels that sense of awe any longer-even when computers perform operations that are incredibly more sophisticated than those which sent thrills down spines in the early days. The once-exciting phrase "Giant Electronic Brain" remains only as a sort of "camp" cliché, a ridiculous vestige of the era of Flash Gordon and Buck Rogers. It is a bit sad that we become blasé so quickly.

There is a related "Theorem" about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of "real thinking". The ineluctable core of intelligence is always in that next thing which hasn't yet been programmed. This "Theorem" was first proposed to me by Larry Tesler, so I call it Tesler's Theorem: "AI is whatever hasn't been done yet."

A selective overview of AI is furnished below. It shows several domains in which workers have concentrated their efforts, each one seeming in its own way to require the quintessence of intelligence. With some of the domains I have included a breakdown according to methods employed, or more specific areas of concentration.

mechanical translation
    direct (dictionary look-up with some word rearrangement)
    indirect (via some intermediary internal language)
game playing
    chess
        with brute force look-ahead
        with heuristically pruned look-ahead
        with no look-ahead
    checkers
    go
    kalah
    bridge (bidding; playing)
    poker
    variations on tic-tac-toe
    etc.
proving theorems in various parts of mathematics
    symbolic logic
        "resolution" theorem-proving
    elementary geometry
symbolic manipulation of mathematical expressions
    symbolic integration
    algebraic simplification
    summation of infinite series
vision
    printed matter:
        recognition of individual hand-printed characters drawn from a small class (e.g., numerals)
        reading text in variable fonts
        reading passages in handwriting
        reading Chinese or Japanese printed characters
        reading Chinese or Japanese handwritten characters
    pictorial:
        locating prespecified objects in photographs
        decomposition of a scene into separate objects
        identification of separate objects in a scene
        recognition of objects portrayed in sketches by people
        recognition of human faces
hearing
    understanding spoken words drawn from a limited vocabulary (e.g., names of the ten digits)
    understanding continuous speech in fixed domains
    finding boundaries between phonemes
    identifying phonemes
    finding boundaries between morphemes
    identifying morphemes
    putting together whole words and sentences
understanding natural languages
    answering questions in specific domains
    parsing complex sentences
    making paraphrases of longer pieces of text
    using knowledge of the real world in order to understand passages
    resolving ambiguous references
producing natural language
    abstract poetry (e.g., haiku)
    random sentences, paragraphs, or longer pieces of text
    producing output from internal representation of knowledge
creating original thoughts or works of art
    poetry writing (haiku)
    story writing
    computer art
    musical composition
        atonal
        tonal
analogical thinking
    geometrical shapes ("intelligence tests")
    constructing proofs in one domain of mathematics based on those in a related domain
learning
    adjustment of parameters
    concept formation
Mechanical Translation

Many of the preceding topics will not be touched upon in my selective discussion below, but the list would not be accurate without them. The first few topics are listed in historical order. In each of them, early efforts fell short of expectations.

For example, the pitfalls in mechanical translation came as a great surprise to many who had thought it was a nearly straightforward task, whose perfection, to be sure, would be arduous, but whose basic implementation should be easy. As it turns out, translation is far more complex than mere dictionary look-up and word rearranging.

Nor is the difficulty caused by a lack of knowledge of idiomatic phrases. The fact is that translation involves having a mental model of the world being discussed, and manipulating symbols in that model. A program which makes no use of a model of the world as it reads the passage will soon get hopelessly bogged down in ambiguities and multiple meanings. Even people-who have a huge advantage over computers, for they come fully equipped with an understanding of the world-when given a piece of text and a dictionary of a language they do not know, find it next to impossible to translate the text into their own language. Thus-and it is not surprising in retrospect-the first problem of AI led immediately to the issues at the heart of AI.
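As an illustration of how shallow the "direct" approach from the overview above is, here is a toy word-by-word translator in Python. The miniature English-French glossary is invented for the example; the point is only that nothing in such a program models the situation being described, so it cannot choose among multiple word senses or repair the grammar of its output.

```python
# A toy "direct" translator: dictionary look-up with no model of the world.
# The glossary is a made-up fragment; real bilingual dictionaries are, of
# course, far larger, but the underlying weakness is the same.
GLOSSARY = {
    "the": "le",
    "spirit": "esprit",    # could equally mean "alcohol" in another context
    "is": "est",
    "willing": "disposé",
    "but": "mais",
    "flesh": "chair",      # could equally mean "meat"
    "weak": "faible",
}

def translate_word_by_word(sentence: str) -> str:
    """Replace each word with its first dictionary entry, keeping word order."""
    words = sentence.lower().rstrip(".").split()
    return " ".join(GLOSSARY.get(w, f"[{w}?]") for w in words)

print(translate_word_by_word("The spirit is willing but the flesh is weak."))
# Output: "le esprit est disposé mais le chair est faible" -- wrong articles,
# wrong elision, and no way to tell which sense of "spirit" or "flesh" is meant,
# because the program has no model of what the sentence is about.
```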

Computer Chess

Computer chess, too, proved to be much more difficult than the early intuitive estimates had suggested. Here again it turns out that the way humans represent a chess situation in their minds is far more complex than just knowing which piece is on which square, coupled with knowledge of the rules of chess. It involves perceiving configurations of several related pieces, as well as knowledge of heuristics, or rules of thumb, which pertain to such higher-level chunks. Even though heuristic rules are not rigorous in the way that the official rules are, they provide shortcut insights into what is going on on the board, which knowledge of the official rules does not. This much was recognized from the start; it was simply underestimated how large a role the intuitive, chunked understanding of the chess world plays in human chess skill. It was predicted that a program having some basic heuristics, coupled with the blinding speed and accuracy of a computer to look ahead in the game and analyze each possible move, would easily beat top-flight human players-a prediction which, even after twenty-five years of intense work by various people, still is far from being realized.
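The kind of look-ahead being described can be sketched abstractly. The Python fragment below shows depth-limited minimax with alpha-beta pruning over a tiny hand-made game tree; the tree and the "heuristic" leaf values are invented placeholders, and a real chess program would generate moves from the board position rather than read them from a table.

```python
# A bare-bones sketch of heuristically pruned look-ahead: depth-limited
# minimax with alpha-beta pruning and a static evaluation at the leaves.
import math

# Children of each position; leaves are scored by the invented EVAL table.
TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
EVAL = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def alphabeta(node, depth, alpha, beta, maximizing):
    """Look ahead `depth` plies, pruning lines that cannot affect the result."""
    if depth == 0 or node not in TREE:
        return EVAL.get(node, 0)          # static heuristic estimate
    if maximizing:
        value = -math.inf
        for child in TREE[node]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # the opponent would never allow this line
                break
        return value
    else:
        value = math.inf
        for child in TREE[node]:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta("start", 2, -math.inf, math.inf, True))   # -> 3
```

The speed of such a search is exactly what the early predictions counted on; what they left out is the chunked, intuitive evaluation that human players bring to each position.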

People are nowadays tackling the chess problem from various angles. One of the most novel involves the hypothesis that looking ahead is a silly thing to do. One should instead merely look at what is on the board at present, and, using some heuristics, generate a plan, and then find a move which advances that particular plan. Of course, rules for the formulation of chess plans will necessarily involve heuristics which are, in some sense, "flattened" versions of looking ahead. That is, the equivalent of many games' experience of looking ahead is "squeezed" into another form which ostensibly doesn't involve looking ahead. In some sense this is a game of words. But if the "flattened" knowledge gives answers more efficiently than the actual look-ahead-even if it occasionally misleads-then something has been gained. Now this kind of distillation of knowledge into more highly usable forms is just what intelligence excels at-so look-ahead-less chess is probably a fruitful line of research to push. Particularly intriguing would be to devise a program which itself could convert knowledge gained from looking ahead into "flattened" rules-but that is an immense task.
