Terminator and Philosophy: I’ll Be Back, Therefore I Am

Richard Brown, William Irwin, Kevin S. Decker
 
21. For another view of how utilitarianism can help us understand the Terminator saga, see “What’s So Terrible about Judgment Day?” by Wayne Yuen, in this volume.
 
22. This is just what the utilitarian philosopher Peter Singer does in his famous book Animal Liberation (New York: Avon, 1975).
 
23. It’s no accident that it’s in this scene, just before the robot teaches him a life lesson, that John happens to mention that the terminator was “about the closest thing to a father” he ever had. It’s also no accident that Dr. Silberman, symbol of Freudian psychology, shows up. Along similar lines, Kate and John’s future warrior-marriage is subsequently symbolized when John supplants her false fiancé with himself, carrying her off in a hearse that transforms from a death wagon into a convertible, dragging noisy bits of things like a twisted wedding limo, which they soon exchange for a family recreational vehicle, complete with symbolic gear for their kids. It is, indeed, at that kairotic moment for Kate in the cemetery that she switches from trying to escape John to joining him in his struggle.
 
24. Aristotle makes a similar point; see Politics (Book I, 1253a27-33).
 
PART FIVE
 
BEYOND THE NEURAL NET
 
17
 
“YOU GOTTA LISTEN TO HOW PEOPLE TALK”: MACHINES AND NATURAL LANGUAGE
 
Jacob Berger and Kyle Ferguson
 
 
Terminators are incredibly lifelike machines. Not only do they look like humans, but they also have extraordinary knowledge of how to kill, how to protect, and how to use weapons. Beyond all that, they have incredible linguistic abilities. Remarkably, Terminators can communicate with human beings using natural languages like English. In Terminator 2: Judgment Day, the T-1000 doesn’t just throw the pilot out of the helicopter during the battle at Cyberdyne Systems; he commands him to “get out!” When Sarah tells the T-101 to keep their car at a certain speed, it understands the message and responds:
 
Sarah: Keep it under 65.
 
T-101: Affirmative.
 
John: No no no no no, you gotta listen to how people talk. Now you don’t say “affirmative” or some shit like that. You gotta say “no problemo.” And if some guy comes up to you with an attitude and you want to shine them on, it’s “Hasta la vista, baby.”
 
T-101: Hasta la vista, baby.
 
 
Near the end of T2, we see that the T-101 has learned this particular language lesson, as it uses the now-famous phrase before shattering the frozen baddie, the T-1000. But John’s remarks in the dialogue above are insightful. While it looks as though the T-101 has a working command of English, the machine struggles with certain aspects of the language as it’s used in communication. Its diction is rigid and forced. Worse, it sometimes just doesn’t understand what people mean. The T-101 communicates like, well, a robot.
 
When Skynet designed the Terminators, it must have operated under certain assumptions about the nature of language, meaning, and communication. These assumptions also shape our approach to designing language-using machines in real-world artificial intelligence research today. So the question is this: how could we design a machine—that is, a computational system—so that it could produce and comprehend statements of natural languages like English, German, Swahili, or Urdu?[1]
 
In order to answer this difficult question, designers must face issues familiar to philosophers of language. Philosophy of language deals with questions like, What is language? What is meaning? And how do things like marks on surfaces (such as notes on paper or images on a computer screen) and sounds in the air become meaningful? What do you know when you know a language? What occurs in linguistic communication? What obstacles must be overcome for this kind of communication to succeed?
 
The answers to these questions make up what we’ll call a linguistic communication theory. If Skynet had no linguistic communication theory, it could not have even begun to design or to program a machine that could use language to communicate or that could carry out missions in a linguistic environment.
 
Think about it. So much of our everyday experience is submerged in language. We look to signs to find our way around, we write reminders to ourselves of places to be and things to do, and we read newspapers to learn about events we’ve never witnessed in places we’ve never been. Weather, traffic, and sports reports pour from our radios, and the sounds of conversation fill up nearly every public space. It is rare, if not impossible, to find oneself in a social situation where language is absent. Skynet sent Terminators to this language-infused world and knew they would need to be able to work their way around with words.
 
“My CPU Is a Neural Net Processor”: The Code Model and Language
 
So, what linguistic communication theory might Skynet have used when it designed its army of badass gun-toting, English-speaking Terminators? One obvious choice is a theory known as the Code Model.[2] One reason why the Code Model makes sense as Skynet’s theory is that the developers of the model, Claude Shannon and Warren Weaver, created it as a way to understand how machines, so to speak, communicate. Claude Shannon was an electrical engineer concerned with information transmission in circuit systems. Warren Weaver worked as a consultant to the United States military and its defense contractors to solve tactical problems, including how to make information transfer more reliable on the battlefield.[3]
 
According to the Code Model, the answer to the question “What is a language?” is that language is a kind of code—that is, a collection of signals and corresponding pieces of information. The answer to the philosopher of language’s question “What is meaning?” is that the meaning of a given signal is the information encoded in the signal. The answer to the question of “How does communication happen?” is that communication occurs when a signaler—the producer of a particular signal—encodes information into a signal, and the receiver—the consumer of the signal—decodes the signal, thereby gaining the encoded information.
 
This may sound sort of complicated, but it’s actually quite simple. Basically, the idea is that information is packed into a signal by a producer; the signal is emitted to, and received by, the consumer; and the consumer then unpacks the information from the signal. If all goes well, the consumer ends up with the same information that the producer originally sent. As long as the producer and consumer share the same code and no “noise” interferes with the signal, the successful transmission of information via signals—that is, communication—is guaranteed.
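
The idea is simple enough to sketch in a few lines of code. Here is a toy version in Python; the shared code and the sample signals are our own inventions for illustration, not anything drawn from Shannon and Weaver:

```python
# A toy Code Model: communication is encoding and decoding over a
# shared code, i.e., a table pairing signals with information.
SHARED_CODE = {
    "get out": "command: exit the vehicle",
    "let go of me": "command: release your grip",
    "no problemo": "acknowledgment: casual agreement",
}

def encode(information: str) -> str:
    """Producer: pack a piece of information into the signal that carries it."""
    for signal, info in SHARED_CODE.items():
        if info == information:
            return signal
    raise ValueError(f"no signal encodes {information!r}")

def decode(signal: str) -> str:
    """Consumer: unpack the information from a received signal."""
    if signal not in SHARED_CODE:
        raise ValueError(f"unknown or noisy signal: {signal!r}")
    return SHARED_CODE[signal]

# With a shared code and no noise, success is guaranteed: whatever
# the producer packs in, the consumer gets back out.
message = "command: release your grip"
assert decode(encode(message)) == message
```

On this picture, communication is nothing more than table lookup, which is exactly why the model appeals to an engineer designing signaling systems.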
 
As an example of this, think of the early scene in T2 when the T-101 tells John that the T-1000 is going to kill Sarah. John immediately attempts to leave in order to find her in time, but the Terminator grabs him. As he struggles to break free, John sees two guys across the street.
 
John (to the two guys): Help! Help! I’m being kidnapped! Get this psycho off of me!
 
John (to the Terminator): Let go of me!
 
(The T-101 immediately lets go of John, who falls to the ground)
 
John: Ow! Why’d you do that?
 
T-101: You told me to.
 
 
Okay, so what’s going on here? According to the Code Model, John’s signal (the sentence “Let go of me!”) had certain information encoded, or packed inside, and the Terminator, since it was programmed with the same code, was able to decode, or unpack, the signal and to acquire the information it contained and respond appropriately.
 
So what did the T-101 do to decode John’s signal? The Code Model suggests that it did two things. First, the Terminator recognized the sounds coming from John’s mouth as signals. Then, it retrieved information matching these signals from its neural net processor (its mind, so to speak). In order to do this, the Terminator would need to be programmed with what linguists call a lexicon and a syntax of a given language. A lexicon is a complete set of meaningful units of a given language, usually words. Think of a lexicon as a “dictionary” of a code, a dictionary that matches individual signals with bits of information or words with their meanings. Syntax (or syntactical rules) specifies how items from the lexicon are combined; this is what people usually think of as “grammar.” By recognizing the lexical items and the syntax of the sentence, the Terminator was able to decode the signal and receive the information it contained. And since the Terminator was programmed to do as John commands, it let John go . . . literally.
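
Here is how that two-step decoding might look, in the same toy Python style as before. The mini-lexicon, the greedy matching, and the single “imperative” rule are all contrivances of ours, far cruder than anything a real parser (or Skynet) would use:

```python
# Toy lexicon: meaningful units of English paired with bits of information.
LEXICON = {
    "let go": "RELEASE",
    "of": None,            # function word: no content of its own here
    "me": "THE-SPEAKER",
    "keep": "MAINTAIN",
    "it": "THE-SALIENT-THING",
    "under 65": "BELOW-65-MPH",
}

def tokenize(sentence: str) -> list:
    """Step 1: recognize the sounds/marks as signals, greedily
    matching the longest lexical item at each position."""
    words = sentence.lower().rstrip("!.").split()
    tokens, i = [], 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in LEXICON:
            tokens.append(pair)
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

def decode(sentence: str) -> dict:
    """Step 2: retrieve each item's information and combine it by one
    crude syntactic rule: in an imperative, the first content item
    names the action and the rest are its arguments."""
    meanings = [LEXICON[t] for t in tokenize(sentence)]
    content = [m for m in meanings if m is not None]
    return {"action": content[0], "arguments": content[1:]}

print(decode("Let go of me!"))
# {'action': 'RELEASE', 'arguments': ['THE-SPEAKER']}
print(decode("Keep it under 65"))
# {'action': 'MAINTAIN', 'arguments': ['THE-SALIENT-THING', 'BELOW-65-MPH']}
```

And since “let go of me,” so decoded, carries nothing beyond RELEASE(THE-SPEAKER), the Terminator executes the release and John hits the pavement.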
 
We can now return to our initial question: How do we design a machine that can produce and comprehend statements in a natural human language? If we accept the Code Model as our linguistic communication theory, we can give an elegantly simple and straightforward answer. All that Skynet needs to do in order to ensure that its army of man-destroying Terminators is capable of understanding and producing English sentences is simply program into the Terminators’ neural net processors the lexicon and syntax of English. It’s that easy. If the T-1000 has the lexicon and syntax for some language, it should be able to understand when people beg it not to kill them and then make quips right before it shoves stabbing weapons into their brains.
 
Why the Terminator Has to Listen to How People Talk
 
Our guess is that Skynet did indeed use the Code Model as its linguistic communication theory when it designed the Terminator.[4] But this is not to say that the Code Model is a good theory of linguistic communication. In fact, it’s a flawed theory, failing to capture how people actually communicate using language. Its shortcomings explain why the Terminator isn’t so hot at sounding like a normal English-speaking human, and why it sometimes doesn’t grasp what normal English-speaking humans mean. The T-101 says, “Affirmative” when it should probably say, “No problemo,” and it drops John on the ground when he tells it to let him go, when it probably should have just set him down. The Terminator fails where the Code Model fails.[5]
 
The problem is that linguistic communication isn’t as straightforward as the Code Model says it is. Basic obstacles arise when people stumble over words, run words together, speak with accents, mumble, and more. Schwarzenegger’s thick Austrian accent makes it hard for the movie-watcher to understand what he says. If audiences had to make out every word that Arnie said in order to understand him, the better part of T2, most of the original Terminator, and every single one of his gubernatorial speeches would be nearly incomprehensible. Just watch Kindergarten Cop again if you need to refresh your memory.
 
The Code Model regards these sorts of problems as noise. Accents, mumbling, and other imperfections are like “static” that corrupts or interferes with the signal and makes it hard for the consumer to acquire the information it contains. We bet Skynet could have designed Terminators so that they could deal with this sort of noise.
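
In the model’s terms, noise is whatever makes the received signal differ from the one emitted. Continuing the toy sketch from above (the garbling function is our own stand-in for accents and mumbling):

```python
import random

def transmit_with_noise(signal: str, noise_level: float = 0.3) -> str:
    """Randomly garble characters, the way an accent, a mumble, or
    static might distort the acoustic signal in transit."""
    return "".join(
        ch if random.random() > noise_level else "?"
        for ch in signal
    )

received = transmit_with_noise("let go of me")
# e.g. "l?t go ?f m?" -- no longer a key in the shared code, so a pure
# lookup decoder fails. A robust decoder must guess the nearest legal
# signal (error correction), which is an engineering problem the Code
# Model can, in principle, handle.
```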
 
But deeper problems than noise abound for the Code Model. Put simply, the word meanings and the order of the words of a sentence are rarely, if ever, enough to give an interpreter access to what a speaker is trying to communicate. For the sake of simplicity, we’ll refer to all of these sorts of complicating features as the pragmatic aspects of language.[6] Let’s consider some examples.
 
Pragmatic aspects of natural languages include, for instance, lexical ambiguity. Consider the quote we reprised at the beginning of the chapter. John says to the Terminator, “And if some guy comes up to you with an attitude and you want to shine them on, it’s ‘Hasta la vista, baby.’” The verb phrase “to shine” has multiple meanings. It can mean “to polish,” “to emit light rays,” “to excel,” and other things. In this case, John uses it as slang to mean “to give someone a hard time.” Linguists call words or phrases that have multiple meanings lexically ambiguous.[7] If a person says a sentence that includes a lexically ambiguous word or phrase, it’s not always clear how to interpret that sentence. How is the Terminator supposed to know whether John’s sentence means that the Terminator is supposed to say “Hasta la vista, baby” to people to whom it wants to give a hard time, or if it means that the Terminator should say the sentence to people on whom it wants to shine a flashlight? Lexical ambiguities make trouble for the Code Model because hearers have no way of resolving an ambiguous signal by appealing to the code itself. If the Terminator were simply assigning pieces of information to John’s signal, it would have no clear basis on which to choose one assignment over another.
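
The trouble is easy to state in code. Give the toy lexicon two entries for a single signal, again our own contrived example, and lookup no longer settles on an interpretation:

```python
# One signal, two candidate meanings: the code itself offers
# no basis for choosing between them.
AMBIGUOUS_LEXICON = {
    "shine on": [
        "direct light onto",
        "give someone a hard time (slang)",
    ],
}

def decode(phrase: str) -> str:
    meanings = AMBIGUOUS_LEXICON[phrase]
    if len(meanings) > 1:
        # Disambiguation needs context, speaker intentions, and world
        # knowledge -- none of which the Code Model represents.
        raise ValueError(f"{phrase!r} is ambiguous: {meanings}")
    return meanings[0]

try:
    decode("shine on")
except ValueError as err:
    print(err)
# 'shine on' is ambiguous: ['direct light onto',
#                           'give someone a hard time (slang)']
```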
