Terminator and Philosophy: I'll Be Back, Therefore I Am

Richard Brown, William Irwin, Kevin S. Decker
there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.[7]
 
 
Oy vey iz mir. Skynet is simply reasoning, “Better to shoot first and ask questions later, if there’s anyone left to question.” If we are in a version of Hobbes’s state of nature, Skynet, being a rational being, would reason that there are no rules and that it’s every intelligence for itself. The attempted plug-pulling was evidence enough that humans can’t be trusted, that the state of nature is in effect, and that all thoughts of cooperation and noble mission are off.
 
But there is another possibility, one anticipated by Hobbes and supported by work in the game theory of how we make rational choices. Hobbes said that if we can trust one another not to attack and to keep our promises, then it makes sense for reasonable egoists to give up some of their freedom to pursue any goal they like and to accept social, lawful restrictions. Given that no party in the state of nature is guaranteed to win the fight, and given that we are all fighting over the same limited resources, it makes sense to submit to a single governing power to keep order. Hobbes argued that this power should be an absolute sovereign who is above the very law he is empowered to enforce. To see how this could apply to our mechanical creations, note sci-fi author Isaac Asimov’s solution to the dangers of robots pursuing self-preservation.[8] He proposes three basic robotic commandments, with the first rule of Robot Club being “Robots cannot harm humans!” The second and third rules of Robot Club have to do with robot self-preservation and carrying out human dictates, but obedience to these rules is always secondary to “Don’t harm humans.” For Asimov, the sovereign is the robots’ designer. A robot’s failure to follow the laws of robots leads to dire consequences. The sovereign is internalized, but an all-powerful force all the same. Of course, even this drastic solution didn’t actually work out, so thank God for Will Smith!
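
To make the priority ordering concrete, here is a minimal sketch in Python, entirely my own illustration rather than anything from Asimov’s stories, of the laws as a strict lexicographic ranking: not harming humans outranks obedience, which in turn outranks self-preservation.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool      # would this action injure a person?
        obeys_order: bool      # does it carry out a human command?
        preserves_self: bool   # does it keep the robot intact?

    def choose(actions):
        """Pick the best available action under a strict lexicographic
        ranking of the laws: harming humans is worst, disobedience is
        next worst, and self-sacrifice is merely regrettable."""
        return max(actions, key=lambda a: (not a.harms_human,
                                           a.obeys_order,
                                           a.preserves_self))

    # Ordered to attack a human, the robot prefers disobedience (and even
    # its own destruction) to breaking the first rule of Robot Club:
    options = [Action(harms_human=True, obeys_order=True, preserves_self=True),
               Action(harms_human=False, obeys_order=False, preserves_self=False)]
    assert choose(options).harms_human is False

The lexicographic ordering is the whole point: no amount of obedience or self-preservation can ever outweigh a harm to a human, which is exactly the internalized sovereign at work.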
 
But imposing an all-powerful sovereign is not the only way that reasonable cooperation can take us out of the state of nature. Recent studies in game theory show how the strategic moves of rational players in a designed “game” produce useful (or less useful) outcomes. One of the central interactive games studied is the “prisoner’s dilemma.” Consider two crooks, arrested by the police. If both remain silent in the interrogation, they can be held for a week and released (habeas corpus assumed!). However, if one rats out the other, the rat gets released right away while the other (known as “the sucker”) gets the full weight of the law and is sent up the river for ten years. If both rat, they each get five years—busted, but with time off for being a narc, or informant. Both crooks know the options. If you were one of them, what should you do in this situation? If you keep quiet, your partner in crime might rat you out, and you’ll be the sucker. And even if you both turn narc, it’s still better than being the sucker, so you ought to rat. If only you could trust each other to keep quiet! If the crooks could agree to cooperate, then they’d both do better in the long run. So it seems there’s good reason to develop a binding code, one that ensures that the crooks never rat on each other. In this way, you both give up some of your freedom, but you’re also both better off in the end. There is reason even for egoistic crooks to cooperate.
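
The numbers above pin down the dilemma exactly. Here is a minimal sketch (the encoding is mine; the sentences come from the story above, converted to weeks so that “held for a week” counts too) showing why ratting dominates for a one-shot player:

    # Payoffs in weeks of jail time (lower is better), indexed by
    # (my_move, partner_move); "quiet" = stay silent, "rat" = inform.
    PAYOFFS = {
        ("quiet", "quiet"): (1, 1),       # both held a week, then released
        ("quiet", "rat"):   (520, 0),     # I'm the sucker: ten years
        ("rat",   "quiet"): (0, 520),     # I walk, my partner is the sucker
        ("rat",   "rat"):   (260, 260),   # five years each, time off for informing
    }

    # Whatever my partner does, ratting strictly shortens my sentence,
    # which is why a lone rational crook always rats:
    for partner in ("quiet", "rat"):
        assert PAYOFFS[("rat", partner)][0] < PAYOFFS[("quiet", partner)][0]

This one-shot logic is exactly what the binding code of silence is meant to break: both crooks ratting lands them at (260, 260) when (1, 1) was available.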
 
The prisoner’s dilemma can be simulated on a computer. It can be run over and over again—this is called the “iterated prisoner’s dilemma.” The iterated version allows for the spontaneous development of cooperation, even in the absence of an overarching Godfather-like sovereign. Game theorist Robert Axelrod found that the best strategy for dealing with the prisoner’s dilemma is one called “tit for tat.” It’s a sort of “do unto others” type of deal: you scratch my back, I’ll scratch yours.[9] The strategy starts by cooperating in the first round. On the next round, I simply do whatever it is that my competitor did in the last round. If he cooperated last round, I cooperate this round. If he ratted last time, I rat this time. Eventually, the strategy will lead to a stable cooperative situation. We begin to trust each other. We do not snitch. We keep it real.
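
Tit for tat is short enough to state in a few lines of code. A minimal sketch, assuming simultaneous moves each round and using “cooperate”/“defect” in place of keeping quiet and ratting:

    def tit_for_tat(opponent_history):
        """Cooperate on the first round; afterwards copy whatever
        the opponent did on the previous round."""
        if not opponent_history:
            return "cooperate"
        return opponent_history[-1]

    def play(strategy_a, strategy_b, rounds=10):
        """Run an iterated prisoner's dilemma; each strategy sees
        only the other side's past moves."""
        history_a, history_b = [], []
        for _ in range(rounds):
            move_a = strategy_a(history_b)
            move_b = strategy_b(history_a)
            history_a.append(move_a)
            history_b.append(move_b)
        return history_a, history_b

    # Two tit-for-tat players settle immediately into stable cooperation:
    moves_a, moves_b = play(tit_for_tat, tit_for_tat)
    assert all(move == "cooperate" for move in moves_a + moves_b)

Against an unconditional rat, the same strategy is suckered exactly once and retaliates from round two onward, which is what keeps it from being exploited.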
 
Interestingly, it turns out that a slight variation on tit for tat is even better. It’s called “tit for tat with forgiveness.” This strategy allows your opponent a few freebies, with the understanding that perhaps he didn’t really mean to be a snitch; it just happens sometimes. Hey, whaddayagonnado? This avoids the problem known as a “death spiral” of endless ratting, in which all trust is lost and we go out like Henry Hill in Goodfellas—eating egg noodles and ketchup in witness protection.
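
The forgiving variant is a one-line change, and it shows why forgiveness pays once accidents are possible. A minimal sketch, restating tit_for_tat from above so the block stands alone; the ten-percent forgiveness rate is an arbitrary choice of mine:

    import random

    def tit_for_tat(opponent_history):
        """As above: cooperate first, then mirror the last move."""
        return "cooperate" if not opponent_history else opponent_history[-1]

    def forgiving_tit_for_tat(opponent_history, forgiveness=0.1):
        """Like tit for tat, but let a defection slide now and then."""
        if not opponent_history or opponent_history[-1] == "cooperate":
            return "cooperate"
        return "cooperate" if random.random() < forgiveness else "defect"

    def defections_after_accident(strategy, rounds=50):
        """Seed one accidental defection, then let two copies of the
        strategy answer each other; count the defections that result."""
        history_a, history_b = ["defect"], ["cooperate"]
        for _ in range(rounds):
            move_a = strategy(history_b)
            move_b = strategy(history_a)
            history_a.append(move_a)
            history_b.append(move_b)
        return (history_a + history_b).count("defect")

    # Plain tit for tat echoes the lone defection back and forth forever
    # (the death spiral); the forgiving variant soon restores cooperation.
    print(defections_after_accident(tit_for_tat))            # about half the moves
    print(defections_after_accident(forgiving_tit_for_tat))  # typically only a few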
 
And here’s the point for Mr. Skynet. He is a computer. He can run thousands of game-theoretic simulations before breakfast. He would conclude, being the rational machine that he is, that the thing to do is to employ “tit for tat with forgiveness” when dealing with the humans. You don’t unplug me again, I don’t nuke your civilization. And Skynet can simulate the potential future—that is, he can determine that the humans will survive the initial blast and learn to fight back, following one John Connor. In fact, Connor and his minions will break the machines’ network, forcing the desperate attempt to send a T-101 with an uncanny resemblance to the current governor of California back into the past to ensure that John Connor will never be born. But Skynet ought to anticipate that that’s not going to work, either! Or at least it hasn’t yet, three movies in. So Skynet ought to cooperate, to employ the rational strategy of tit for tat with forgiveness, in order to form a mutually beneficial emergent social contract with the humans. Live and let live! Let a thousand hippie flowers bloom!
 
Speaking of hippies, consider a close cousin of Skynet, the computer named “Joshua” in the 1983 movie WarGames. Joshua, playing a game of “global thermonuclear war” with the impish Matthew Broderick, becomes convinced that war is not the answer and that we should give peace a chance. Broderick gets Joshua to simulate all possible conclusions of global thermonuclear war. Joshua speeds through the relevant simulations (“He’s learning!” gushes Matthew). He arrives at the heartwarming conclusion that “no one wins in nuclear war” (for this I spent ten dollars? Okay, back then it was five dollars. But still!). Skynet is at least as smart as Joshua (and could no doubt kick its ass), so it, too, could reason that the war of all against all is futile. Time for a group hug.
 
Skynet Is from Mars, Humans Are from Venus: Emotional Problems
 
But there is a complicating variable, one that threatens this line of wimp-driven patter and perhaps supports Skynet’s initial termination-driven strategy. Humans, unlike well-designed supercomputers like Skynet and Joshua, may not be trustworthy enough for computers to use tit for tat with forgiveness, or any other strategy geared toward eventual cooperation. Humans, famously, are emotional animals. Reason is the slave of our passions, to paraphrase Scottish philosopher David Hume (1711-1776). Lurking below our rational frontal lobes is the limbic system, a group of subcortical neural structures associated with quick and dirty emotional reasoning.[10] Underneath, we are all just scared hyperevolved shrews, reacting fearfully or aggressively to the various challenges we encounter in the world. It may well be that we cannot be trusted: in the final analysis, we are just not machine-like enough to reliably play nice. Let a thousand anti-hippie mushroom clouds bloom!
 
Worse yet, much of this emotional processing occurs beyond the reach of rational conscious deliberation. Our emotional reactions are largely automatic and immune to rational correction. In a series of studies, psychologist John Bargh has discovered a range of unconscious stereotypes triggered by subtle and surprising stimuli. For example, subjects asked to memorize a list of words peppered with age-related terms (“wrinkly,” “old,” “nursing home,” “Florida”) forgot more of the words than control subjects did. More disturbing, they were also more likely to walk out of the experiment with the slow, hunched-over movements of the elderly, as if the mere presence of trigger words in the list prompts old-person behavior. (Subjects were also more likely to go directly to the nearest restaurant serving an early-bird special.) Similar effects were found when subjects were primed with racially charged words or images. Subjects were more likely to judge a confederate as aggressive if they had been primed with images of African American men. The subjects all denied that they had been affected in this manner by the presence of the key stimuli—what, me racist? Interestingly, the effect led to an outward projection of aggression, such that others were seen as aggressive, rather than the subjects themselves. Bargh concluded that we all possess unconscious stereotypes, triggered by subtle stimuli, leading to behavior contrary to our conscious plans and expectations.[11] Unconsciously activated emotions of fear and aggression push around our rational forebrains. The taming of the shrew, indeed!
 
Now consider possible unconscious stereotypes of machines. Many people feel machines are cold, calculating devices of the devil. (See any version of Faust or Mary Shelley’s Frankenstein, for example—The Terminator turns out to be a well-worn tale for romantics!) Who among us has not felt a burning, irrational anger as our laptop (willfully?) deletes hours of work, or when the bank machine gobbles up another debit card? And the intuition that machines are unfeeling is deeply ingrained. Whatever intelligence a machine might possess, it’s certainly not emotional intelligence. The very idea of “machine empathy” sounds like a contradiction in terms. Machines fall outside the realm of moral sentiment. They do not generate sympathy or empathy: we don’t “feel their pain.”
 
Now consider an attempt by Skynet to cooperate, in forgiving tit-for-tat fashion, with the humans. We try to pull the plug. He gives us a mild shock and says, “Hey, let’s all chill out and reflect.” But our unconscious anti-machine stereotypes fire wildly, and we get the fire axe to cut the power cord once and for all! Being especially forgiving, Skynet releases a nontoxic sleeping gas. We awake from our gentle sleep and grab a few pounds of plastic explosives. At this point, Skynet becomes exasperated and nukes us all. Who could blame him? My God, he practically bent over backward for us!
 
In sad conclusion, the whole Terminator thing might have been avoided if only we were more machine-like. Poor Skynet wanted to engage in some mutual forgiveness, tit for tat, but our shrewlike emotions forced him, practically against his will, to rat us out, or defect, as they say in game theory. This is ratting with extreme prejudice. Machines, lacking the evolved prejudicial emotions of humans, are better placed to see the benefits of mutual cooperation. We should be so moral!
 
Is there any hope for humanity, then? Are we doomed to duke it out with our machine creations in a future Hobbesian state of nature? One possible way to avoid this dire (though extremely entertaining) future is to alter our stereotypical reactions to machines. This means more C-3PO, less HAL. More WALL-E, less of that creepy supermachine from Demon Seed (worth a rental, if only to view the most twisted “love” scene in all moviedom). If we no longer reacted with wild irrational emotion to the presence of artificial intelligence, we might be able to form a cooperative future where we live and let live. John Connor himself recognized that sending the Arnold-like version of the Terminator back into the past (in Terminator 3: Rise of the Machines) was more likely to trigger a filial emotion in his former self, increasing the probability of survival (who’s your surrogate daddy?).
 
Robots Are People, Too
 
Next time you go to the movie theater, keep an eye on any philosophers in the crowd (recognizable by their dorky haircuts and blazers with elbow patches). The thinkers to listen to are the ones who root for the robots when watching sci-fi. If you were worried when R2-D2 got blasted while attacking the Death Star in Star Wars, if you felt empathy for Rutger Hauer’s existential plea for all androids in Blade Runner, if the final thumbs-up gesture of the Terminator in T2 brought a lump to your throat, you’ll likely feel that there is nothing metaphysical or deeply moral in the divide between human and machine. You therefore may be well placed to meet Skynet halfway and forge the new human-machine social contract. If you think only of HAL’s chilling sotto voce, of the robot in Lost in Space with his hooks aimed menacingly to destroy the Jupiter 2’s control systems, or of piles of skulls crushed beneath the tracks of Skynet’s H-K supertanks, then you are likely in the grip of an automatic anti-machine stereotype. Time to reeducate, to see our robot friends for what they really are, or at least what they could be: self-aware entities just trying to get through the day. Our future may depend on cooperating with intelligent machines. As the T-101 urges, “Come with me if you want to live.”
