Amazingly, the program has only a single conversational path—a nicely site-specific one, having been written the night before the contest about a current event—and all of its text-parsing and programming finesse is aimed at keeping the conversation on it. The feeling of eeriness that I’d felt reading that first conversation disappeared; the program was actually quite simple, and when it crashed and burned, it did so rather spectacularly. But when it worked, it *really* worked.
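To make the mechanics concrete, here is a minimal sketch, in Python, of the one-track design described above. Everything in it is hypothetical: the script lines, the trigger words, and the deflections stand in for whatever the program’s author actually wrote; the point is only how little parsing a single-path bot needs.

```python
# A one-track chatbot: it walks a single scripted monologue, and its only
# "parsing" is a crude check for cues that let it stay on that path.
# Script lines, keywords, and deflections below are all hypothetical.

SCRIPT = [
    "Did you see the news this morning? Quite a story.",
    "I think the whole affair says something about the state of politics.",
    "Of course, the press will spin it however they like.",
    "Anyway, as I was saying about the fallout from all this...",
]

# Canned lines for when the judge tries to change the subject.
DEFLECTIONS = [
    "Almost certainly! Let's move on.",
    "Time to get off this one I think.",
]

def reply(user_input: str, step: int) -> tuple[str, int]:
    """Return the bot's next line and its updated position in the script."""
    on_topic = any(w in user_input.lower() for w in ("news", "politic", "press"))
    if on_topic or not user_input.strip():
        # The judge is playing along: advance down the single path.
        return SCRIPT[min(step, len(SCRIPT) - 1)], step + 1
    # Off-script input: deflect, and hope the judge wanders back.
    return DEFLECTIONS[step % len(DEFLECTIONS)], step
```

As long as the judge plays along, the script advances and the exchange feels responsive; one genuinely off-script question and nothing is left but the canned deflections.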

Another judge—Clay—gets taken for the same ride as Elliott, waxing political, musing over the state of the nation, and really engaging the questions. In fact everything is going swimmingly until the very end, when Clay’s signing off:

JUDGE: it looks like i’ve got to go, Catherine. its been nice talking with you

JUDGE: do you need water or something before the 5 minute rounds?

At which point the program erupts into gibberish.

REMOTE: OK, yes on balance … Time to get off this one I think and onto something more down to earth!

JUDGE: like what?

REMOTE: Almost certainly! let’s move on

REMOTE: I suppose it depends on where you’re coming from, but as the song goes in My Fair Lady, “Why can’t a woman be more like a man?”

Defeat from the jaws of victory.

The whole thing got me thinking about conversation, and storytelling in particular. There was a program called “Joe the Janitor” in 1995—written by the 1994 winner, Thomas Whalen of the Canadian Communications Research Centre—that took a similar tack, adopting the persona of a janitor in danger of being fired after being wrongly accused of stealing from his office. The template matching was sophisticated enough that references to “unions” would prompt him to gripe about his union and so on—a level of responsiveness above Catherine—and the “script” itself was quite engaging and sympathy-inducing. Joe’s depressed and obsessive demeanor—reasonable, given his situation—helped make his topic monomania seem plausible:

JUDGE: ever bneen to austraila?

REMOTE: Huh?

REMOTE: Maybe I just got too much on my mind.

REMOTE: I got real trouble at work. I think I’m going to get fired.

And, like many depressed people, he didn’t take the judges’ suggestions for ways to help the situation, or attempts to lighten his spirits, to heart. He just kept depressively spieling.
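The “unions” trigger suggests plain keyword-to-template matching. Here is a minimal sketch under that assumption; the trigger words and gripes are invented for illustration and are not taken from Whalen’s program, though the fallback lines echo the transcript above.

```python
import random

# Hypothetical keyword -> canned-gripe table in the spirit of Joe the
# Janitor: any mention of a trigger word pulls out the matching complaint,
# and everything else falls back to the fixed, depressive refrain.
TEMPLATES = {
    "union": "My union rep won't even return my calls. Some union.",
    "boss": "The boss already thinks I took that money. He won't listen.",
    "fired": "Twenty years mopping those floors, and this is the thanks I get.",
}

FALLBACKS = [
    "Huh?",
    "Maybe I just got too much on my mind.",
    "I got real trouble at work. I think I'm going to get fired.",
]

def joe(user_input: str) -> str:
    text = user_input.lower()
    for keyword, gripe in TEMPLATES.items():
        if keyword in text:
            return gripe
    # No keyword hits: stay on the one topic Joe can talk about.
    return random.choice(FALLBACKS)
```

Ask Joe about Australia and you get “Huh?”; mention his union and he gripes on cue. The monomania is not a bug in the script but its whole strategy.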

Hot-Blooded and Cold-Blooded

The second chatbot to achieve major prominence within the AI literature, after ELIZA in 1966, was a bot written in 1972 called PARRY. In many ways, PARRY is ELIZA’s opposite: where ELIZA mimicked a therapist, PARRY mimicked a patient—specifically, a paranoid schizophrenic. Where ELIZA was satirical, a “parody” of a therapist (in Weizenbaum’s words), PARRY—written by Kenneth Colby, a Stanford psychiatrist—was an earnest attempt to understand and model paranoia.

Stylistically, PARRY is also ELIZA’s inverse. While ELIZA practiced the “reflective listening” of Carl Rogers and his school of “client-centered therapy,” PARRY (after which Catherine was modeled) took the lead—and kept it. When these two programs, in the early 1970s, were made to talk with each other, these differences were clear as day. Note how the attempt to change topic is unceremoniously rebuffed:

PARRY: I went to the track at Bay Meadows a while back.

ELIZA: What does that suggest to you?

PARRY: I went to the races.

ELIZA: What else would you like to discuss?

PARRY: Horseracing interests me.

In this conversation, and in general, ELIZA seems only to view itself as a kind of verbal mirror, an opportunity for its interlocutor to speak. PARRY seems only to view its interlocutor as an opportunity for *it* to speak. If conversation were tennis, the former is a practice wall, and the latter, a serving machine. Neither really needs to engage or react—or even move.
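ELIZA’s side of such an exchange can be approximated in a few lines: swap the pronouns and hand the statement back as a question. This is a toy reduction of the reflective-listening idea, not Weizenbaum’s actual rule set, which used ranked keywords and decomposition templates.

```python
import re

# A bare-bones Rogerian "verbal mirror": reflect the speaker's statement
# back as a question by swapping first- and second-person pronouns.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are",
}

def reflect(statement: str) -> str:
    words = re.findall(r"[a-z']+", statement.lower())
    swapped = " ".join(PRONOUN_SWAPS.get(w, w) for w in words)
    return f"Why do you say {swapped}?"

print(reflect("I went to the races"))
# -> Why do you say you went to the races?
```

A PARRY-style program needs no such mirror: its templates push its own fixed preoccupations regardless of what comes in.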

As such, they perfectly illustrate the Scylla and Charybdis of conversation: at one extreme is ELIZA, what you could call the “reptilian” or “cold-blooded” approach—“not enough me.” At the other is PARRY, “too much me,” what could be called the “hot-blooded” or “mammalian” approach. As cognitive neuroscientist Michael Gazzaniga writes, “Vocal communication from the chimp’s point of view may just be ‘It’s all about me,’ which when you think about it isn’t all that different from many human dates.”

Speaking of dates: Arguably the two most prominent “pickup artists” of the past twenty years, Mystery and Ross Jeffries, fall into the same dichotomy. Mystery, star of *The Game* as well as VH1’s *The Pickup Artist*, was a magician in his twenties; he first learned the gift of gab as *patter*: a way to hold and direct a person’s attention while you run them through a routine. “Looking back on the women I have shared intimate moments with,” he writes, “I just talked their ears off on the path from meet to sex … I don’t talk about her. I don’t ask many questions. I don’t really expect her to have to say much at all. If she wants to join in, great, but otherwise, who cares? This is my world, and she is in it.” This is the performer’s relationship to his audience.

At the other extreme is the therapist’s relationship to his client. Ross Jeffries, arguably the most famous guru of attraction before Mystery,[1] draws his inspiration not from stage magic but from the same field that inspired ELIZA: therapy. Where Mystery speaks mostly in the first person, Jeffries speaks mostly in the second. “I’m gonna tell you something about yourself,” he begins a conversation with one woman. “You make imagery in your mind, very, very vividly; you’re a very vivid daydreamer.” Where Mystery seems perhaps a victim of solipsism, Jeffries seems almost to *induce* it in others.

Jeffries’s approach to language comes from a controversial psychotherapeutic and linguistic system developed in the 1970s by Richard Bandler and John Grinder called Neuro-Linguistic Programming (NLP). There’s an interesting and odd passage in one of the earliest NLP books where Bandler and Grinder speak disparagingly of talking about oneself. A woman speaks up at one of their seminars and says, “If I’m talking to someone about something that I’m feeling and thinking is important to me, then …”

“I don’t think that will produce connectedness with another human being,” they respond. “Because if you do that you’re not paying attention to *them*, you’re only paying attention to *yourself*.” I suppose they have a point, although connectedness works both ways, and so introspection could still connect *them* to *us*, if not vice versa. Moreover, language at its best requires both the speaker’s motive for speaking *and* their consideration of their audience. Ideally, the other is in our minds even when we talk about ourselves.

The woman responds, “OK. I can see how that would work in therapy, being a therapist. But in an intimate relationship,” she says, it doesn’t quite work. I think that’s true. The therapist—in some schools of therapy, anyway—wants to remain a cipher. Maybe the interviewer does too. *Rolling Stone* interviewer Will Dana, in *The Art of the Interview*, advises: “You want to be as much of a blank screen as possible.” David Sheff remarked to me that perhaps “the reason I did so many interviews is that it was always more comfortable to talk about other people more than talk about myself.”[2] In an interview situation, there’s not necessarily anything wrong with being a blank screen. But a *friend* who absents himself from the friendship is a bit of a jerk. And a lover who wants to remain a cipher is sketchy in both senses of the word: roughly outlined, and iffy.

The Limits of Demonstration

If poetry represents the most *expressive* way of using a language, it might also, arguably, represent the most *human*. Indeed, there’s a sense in which a computer poet would be a much scarier prospect to contend with than a computer IRS auditor[3] or a computer chess player. It’s easy to imagine, then, the mixture of skepticism, intrigue, and general discomfort that attended the publication, in 1984, of the poetry volume *The Policeman’s Beard Is Half Constructed*: “The First Book Ever Written by a Computer”—in this case, a program called Racter.

But as both a poet and a programmer, I knew to trust my instincts when I read *The Policeman’s Beard Is Half Constructed* and felt instantly that something was fishy.

I’m not the only one who had this reaction to the book; you still hear murmurs and grumbles, twenty-five years after its publication, in the literary and AI communities alike. To this day it isn’t fully known how exactly the book was composed. Racter itself, or some watered-down version thereof, was made available for sale in the 1980s, but the consensus among people who have played around with it is that it’s far from clear how it could have made *The Policeman’s Beard*.

More than iron, more than lead, more than gold I need electricity.

I need it more than I need lamb or pork or lettuce or cucumber.

I need it for my dreams.

–RACTER

Programmer William Chamberlain claims in his introduction that the book contains “prose that is in no way contingent upon human experience.” This claim is utterly suspect; every possible aspect of the above “More than iron” poem, for instance, represents the human notion of meaning, of grammar, of aesthetics, even of what a computer might say if it could express itself in prose. Wittgenstein famously said, “If a lion could speak, we could not understand him.” Surely the “life” of a computer is far less intelligible to us than that of a lion, on biological grounds; the very intelligibility of Racter’s self-description demands scrutiny.

Its structure and aesthetics, too, raise doubts about the absence of a human hand. The anaphora of the first sentence (“more … more … more”) is brought into a nice symmetry with the polysyndeton (“or … or … or”) of the second. The lines also embody the classic architecture of jokes and yarns: theme, slight variation, punch line. These are human structures. My money—and that of many others—says Chamberlain hard-coded these structures himself.
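If that suspicion is right, the generation could look something like the sketch below, in which the anaphora and the polysyndeton live in a template written by a person and the program merely fills slots. The word lists are invented; nothing here is Racter’s actual code.

```python
import random

# Human-authored template, machine-filled slots: the "more ... more ...
# more" and "or ... or ... or" structures are typed in by the programmer.
METALS = ["iron", "lead", "gold", "tin", "copper"]
FOODS = ["lamb", "pork", "lettuce", "cucumber", "bread"]
NEEDS = ["electricity", "silence", "moonlight"]

def racterish() -> str:
    m = random.sample(METALS, 3)
    f = random.sample(FOODS, 4)
    need = random.choice(NEEDS)
    return (
        f"More than {m[0]}, more than {m[1]}, more than {m[2]} "
        f"I need {need}. I need it more than I need "
        f"{f[0]} or {f[1]} or {f[2]} or {f[3]}. I need it for my dreams."
    )

print(racterish())
```

The surface varies from run to run, but the architecture of theme, variation, and punch line never does, because a person wrote it.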

A close structural reading of the text raises important questions about Racter’s authorship, as does asking whether the notion of English prose severed from human experience is even a comprehensible idea. But setting these aside, the larger point might be that *no* “demonstration” is impressive, in the way that no prepared speech will ever tell you for certain about the intelligence of the person reciting it.

Some of the earliest questions that come to mind about the capacities of chatbots are things like “Do they have a sense of humor?” and “Can they display emotions?” Perhaps the simplest answer to this type of question is “If a novel can do it, they can do it.” A bot can tell jokes—because jokes can be written for it and it can display them. And it can convey emotions, because emotion-laden utterances can be written in for it to display as well. Along these lines, it can blow your mind, change your mind, teach you something, surprise you. But it doesn’t make the *novel* a person.

In early 2010, a YouTube video appeared on the Internet of a man having a shockingly cogent conversation with a bot about Shakespeare’s *Hamlet*. Some suspected it might herald a new age for chatbots, and for AI. Others, including myself, were unimpressed. Seeing sophisticated behavior doesn’t necessarily indicate a *mind*. It might just indicate a *memory*. As Dalí so famously put it, “The first man to compare the cheeks of a young woman to a rose was obviously a poet; the first to repeat it was possibly an idiot.”

For instance, three-time Loebner Prize winner Richard Wallace recounts an “AI urban legend” in which “a famous natural language researcher was embarrassed … when it became apparent to his audience of Texas bankers that the robot was consistently responding to the *next* question he was about to ask … [His] demonstration of natural language understanding … was in reality nothing but a simple script.”

No demonstration is ever sufficient. Only *inter*action will do.

We so often think of intelligence, of AI, in terms of *sophistication* of behavior, or *complexity* of behavior. But in so many cases it’s impossible to say much with certainty about the program itself, because there are any number of different pieces of software—of wildly varying levels of “intelligence”—that could have produced that behavior.

No, I think sophistication, complexity of behavior, is not it at all. Computation theorist Hava Siegelmann offhandedly described intelligence as “a kind of sensitivity to things,” and all of a sudden it clicked—that’s it! These Turing test programs that hold forth, these prefabricated poem templates, may produce interesting output, but they’re *static*, they don’t *react*. They are, in other words, *insensitive*.

Deformation as Mastery

In his famous 1946 essay “Politics and the English Language,” George Orwell says that any speaker repeating “familiar phrases” has “gone some distance towards turning himself into a machine.” The Turing test would seem to corroborate that.

UCSD computational linguist Roger Levy: “Programs have gotten relatively good at what is actually said. We can devise complex new expressions, if we intend new meanings, and we can understand those new meanings. This strikes me as a great way to break the Turing test [programs] and a great way to distinguish yourself as a human. I think that in my experience with statistical models of language, it’s the unboundedness of human language that’s really distinctive.”[4]
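A toy illustration of the point about statistical models: a bigram model trained on a tiny corpus recognizes stock phrasing but has simply never seen a novel, yet perfectly intelligible, combination. The corpus and test sentences are invented for the example.

```python
from collections import Counter

# Count adjacent word pairs in a tiny training corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def attested(sentence: str) -> list[tuple[str, int]]:
    """Pair each adjacent-word bigram with its training count."""
    words = sentence.split()
    return [(f"{a} {b}", bigrams[(a, b)]) for a, b in zip(words, words[1:])]

print(attested("the cat sat on the rug"))    # every bigram seen in training
print(attested("the rug dreamed of cats"))   # mostly unseen: counts of zero
```

Real systems smooth over unseen pairs rather than scoring them zero, but the asymmetry survives: the model’s grip is weakest exactly where human expression is most inventive.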
