Some proponents of the Turing Test endorse it because they believe that passing the Turing Test provides good evidence that the machine thinks. After all, if human behavior convinces us that humans think, then why shouldn't the same behavior convince us that machines think? Other proponents of the Turing Test endorse it because they think it's impossible for a machine that can't think to pass the test. In other words, they believe that given what is meant by the word "think," if a machine can pass the test, then it thinks.
There is no question that the machines of the Terminator universe can pass versions of the Turing Test. In fact, to some degree, the events of all three Terminator movies are a series of such tests that the machines pass with flying colors. In The Terminator, the Model T-101 (Big Arnie) passes for a human being to almost everyone he meets, including three muggers ("nice night for a walk"), a gun-store owner ("twelve-gauge auto-loader, the forty-five long slide"), the police officer attending the front desk at the station ("I'm a friend of Sarah Connor"), and to Sarah herself, who thinks she is talking to her mother on the telephone ("I love you too, sweetheart"). The same model returns in later movies, of course, displaying even higher levels of ability. In T2, he passes as "Uncle Bob" during an extended stay at the survivalist camp run by Enrique Salceda and eventually convinces both Sarah and John that he is, if not a human, at least a creature that thinks and feels like themselves.
The model T-1000 Terminator (the liquid metal cop) has an even more remarkable ability to pass for human. Among its achievements are convincing young John Connor's foster parents and a string of kids that it is a police officer and, most impressively, convincing John's foster father that it is his wife. We don't get to see as much interaction with humans from the model T-X (the female robot) in T3, though we do know that she convinces enough people that she is the daughter of Lieutenant General Robert Brewster to get in to see him at a top security facility during a time of national crisis. Given that she's the most intelligent and sophisticated Terminator yet, it is a fair bet that she has the social skills to match.
Of course, not all of these examples involved very complex interactions, and often the machines that pass for a human only pass for a very strange human. We should be wary of making our Turing Tests too easy, since a very simple Turing Test could be passed even by something like Sarah Connor's and Ginger's answering machine. After all, when it picked up, it played: "Hi there . . . fooled you! You're talking to a machine," momentarily making the T-101 think that there was a human in the room with him. Still, there are enough sterling performances to leave us with no doubt that Skynet has machines capable of passing a substantial Turing Test.
There is a lot to be said for using the Turing Test as our standard. It's plausible, for example, that our conclusions as to which things think and which things don't shouldn't be based on a double standard that favors biological beings like us. Surely human history gives us good reason to be suspicious of prejudices against outsiders that might cloud our judgment. If we accept that a machine made of meat and bones, like us, can think, then why should we believe that thinking isn't something that could be done by a machine composed of living tissue over a metal endoskeleton, or by a machine made of liquid metal? In short, since the Terminator robots can behave like thinking beings well enough to pass for humans, we have solid evidence that Skynet and its more complex creations can in fact think.3
“It’s Not a Man. It’s a Machine.”
Of course, solid evidence isn't the same thing as proof. The Terminator machines' behavior in the movies justifies accepting that the machines can think, but this doesn't eliminate all doubt. I believe that something could behave like a thinking being without actually being one.
You may disagree; a lot of philosophers do.4
I find that the most convincing argument in the debate is John Searle's famous "Chinese room" thought experiment, which in this context is better termed the "Austrian Terminator" thought experiment, for reasons that will become clear.5
Searle argues that it is possible to behave like a thinking being without actually being a thinker. To demonstrate this, he asks us to imagine a hypothetical situation in which a man who does not speak Chinese is employed to sit in a room and sort pieces of paper on which are written various Chinese characters. He has a book of instructions, telling him which Chinese characters to post out of the room through the out slot in response to other Chinese characters that are posted into the room through the in slot. Little does the man know, but the characters he is receiving and sending out constitute a conversation in Chinese. Then in walks a robot assassin! No, I'm joking; there's no robot assassin.
Searle's point is that the man is behaving like a Chinese speaker from the perspective of those outside the room, but he still doesn't understand Chinese. Just because someone—or something—is following a program doesn't mean that he (or it) has any understanding of what he (or it) is doing. So, for a computer following a program, no output, however complex, could establish that the computer is thinking.
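The room's "book of instructions" amounts to nothing more than a lookup table, and the point can be made concrete with a toy program. The phrases and replies below are invented for illustration; the sketch simply shows that matching inputs to canned outputs requires no understanding of what any of the symbols mean.

```python
# A toy "Chinese room": replies are produced by pure symbol matching.
# Nothing here models meaning; the program no more "understands" these
# sentences than Searle's man understands Chinese.
RULEBOOK = {
    "Are you Sarah Connor?": "Yes, I am.",
    "What is your mission?": "I cannot say.",
}

def room(message: str) -> str:
    """Look up the incoming symbols and return the scripted reply."""
    return RULEBOOK.get(message, "I do not understand the question.")

print(room("Are you Sarah Connor?"))
```

However large the rulebook grows, the mechanism stays the same: symbols in, symbols out, with no grasp of what the conversation is about.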
Or let's put it this way. Imagine that inside the Model T-101 cyborg from The Terminator there lives a very small and weedy Austrian, who speaks no English. He's so small that he can live in a room inside the metal endoskeleton. It doesn't matter why he's so small or why Skynet put him there; who knows what weird experiments Skynet might perform on human stock?6
Anyway, the small Austrian has a job to do for Skynet while living inside the T-101. Periodically, a piece of paper filled with English writing floats down to him from Big Arnie's neck. The little Austrian has a computer file telling him how to match these phrases of English with corresponding English replies, spelled out phonetically, which he must sound out in a tough voice. He doesn't understand what he's saying, and his pronunciation really isn't very good, but he muddles his way through, growling things like "Are you Sarah Cah-naah?," "Ahl be bahk!," and "Hastah lah vihstah, baby!"7
The little Austrian can see into the outside world, fed images on a screen by cameras in Arnie’s eyes, but he pays very little attention. He likes to watch when the cyborg is going to get into a shootout or drive a car through the front of a police station, but he has no interest in the mission, and in fact, the dialogue scenes he has to act out bore him because he can’t understand them. He twiddles his thumbs and doesn’t even look at the screen as he recites mysterious words like “Ahm a friend of Sarah Ca-hnaah. Ah wahs told she wahs heah.”
When the little Austrian is called back to live inside the T-101 in T2, his dialogue becomes more complicated. Now there are extended English conversations about plans to evade the Terminator T-1000 and about the nature of feelings. The Austrian dutifully recites the words that are spelled out phonetically for him, sounding out announcements like "Mah CPU is ah neural net processah, a learning computah" without even wondering what they might mean. He just sits there flicking through a comic book, hoping that the cyborg will soon race a truck down a busy highway.
The point, of course, is that the little Austrian doesn't understand English. He doesn't understand English despite the fact that he is conducting complex conversations in English. He has the behavior down pat and can always match the right English input with an appropriate Austrian-accented output. Still, he has no idea what any of it means. He is doing it all, as we might say, in a purely mechanical manner.
If the little Austrian can behave like the Terminator without understanding what he is doing, then there seems no reason to doubt that a machine could behave like the Terminator without understanding what it is doing. If the little Austrian doesn't need to understand his dialogue to speak it, then surely a Terminator machine could also speak its dialogue without having any idea what it is saying. In fact, by following a program, it could do anything while thinking nothing at all.
You might object that in the situation I described, it is the Austrian's computer file with rules for matching English input to English output that is doing all the work and it is the computer file rather than the Austrian that understands English. The problem with this objection is that the role of the computer file could be played by a written book of instructions, and a written book of instructions just isn't the sort of thing that can understand English. So Searle's argument against thinking machines works: thinking behavior does not prove that real thinking is going on.8
But if thinking doesn’t consist in producing the right behavior under the right circumstances, what could it consist in? What could still be missing?
“Skynet Becomes Self-Aware at 2:14 AM Eastern Time, August 29th.”
I believe that a thinking being must have certain conscious experiences. If neither Skynet nor its robots are conscious, if they are as devoid of experiences and feelings as bricks are, then I can't count them as thinking beings. Even if you disagree with me that experiences are required for true thought, you will probably agree at least that something that never has an experience of any kind cannot be a person. So what I want to know is whether the machines feel anything, or to put it another way, I want to know whether there is anything that it feels like to be a Terminator.
Many claims are made in the Terminator movies about a Terminator's experiences, and there is a lot of evidence for this in the way the machines behave. "Cyborgs don't feel pain. I do," Reese tells Sarah in The Terminator, hoping that she doesn't bite him again. Later, he says of the T-101, "It doesn't feel pity or remorse or fear." Things seem a little less clear-cut in T2, however. "Does it hurt when you get shot?" young John Connor asks his T-101. "I sense injuries. The data could be called pain," the Terminator replies. On the other hand, the Terminator says he is not afraid of dying, claiming that he doesn't feel any emotion about it one way or the other. John is convinced that the machine can learn to understand feelings, including the desire to live and what it is to be hurt or afraid. Maybe he's right. "I need a vacation," confesses the T-101 after he loses an arm in battle with the T-1000. When it comes time to destroy himself in a vat of molten metal, the Terminator even seems to sympathize with John's distress. "I'm sorry, John. I'm sorry," he says, later adding, "I know now why you cry." When John embraces the Terminator, the Terminator hugs him back, softly enough not to crush him.
As for the T-1000, it, too, seems to have its share of emotions. How else can we explain the fact that when Sarah shoots it repeatedly with a shotgun, it looks up and slowly waves its finger at her? That’s gloating behavior, the sort of thing motivated in humans by a feeling of smug superiority. More dramatically yet, when the T-1000 is itself destroyed in the vat of molten metal, it bubbles with screaming faces as it melts. The faces seem to howl in pain and rage with mouths distorted to grotesque size by the intensity of emotion.
In T3, the latest T-101 shows emotional reactions almost immediately. Rejecting a pair of gaudy star-shaped sunglasses, he doesn't just remove them but takes the time to crush them under his boot. When he throws the T-X out of a speeding cab, he bothers to say "Excuse me" first. What is that if not a little Terminator joke? Later, when he has been reprogrammed by the T-X to kill John Connor, he seems to fight some kind of internal battle over it. The Terminator advances on John, but at the same time warns him to get away. As John pleads with it, the Terminator's arms freeze in place; the cyborg pounds on a nearby car until it is a battered wreck, just before deliberately shutting himself down. This seems less like a computer crash than a mental breakdown caused by emotional conflict. The T-101 even puts off killing the T-X long enough to tell it, "You're terminated," suggesting that the T-1000 was not the first Terminator designed to have the ability to gloat.
As for the T-X itself, she makes no attempt to hide her feelings. “I like your car,” she tells a driver, just before she throws her out and takes it. “I like your gun,” she tells a police officer, just before she takes that. She licks Katherine Brewster’s blood slowly, as if enjoying it, and when she tastes the blood of John Connor, her face adopts an expression of pure ecstasy. After she loses her covering of liquid metal, the skeletal robot that remains roars with apparent hatred at both John and the T-101, seeming less like an emotionless machine than an angry wild animal.
We don’t want to be prejudiced against other forms of life just because they aren’t made of the same materials we are. And since we wouldn’t doubt that a human being who behaved in these ways has consciousness and experiences, we have good evidence that the Terminator robots (and presumably Skynet itself) have consciousness and experiences. If we really are justified in believing that the machines are conscious, and if consciousness really is a prerequisite for personhood, then that’s good news for those of us who are hoping that the end of humanity doesn’t mean the end of people on Earth. Good evidence isn’t proof, however.