
With that in mind, let me introduce a more important distinction. In one version or understanding of the Turing test, which at first sight seems closer to Turing’s intention, the test is designed around the question “Is this entity that I’m interacting with just a machine, built to fool me into thinking it’s a human being? Or is it really a human being?” Call this version of the test T1. A different version, which we’ll call T2 in honor of a famous cyborg, is designed around a much broader question: “Is the entity just a machine, built to fool me into thinking it has a mind? Or does it really have a mind?”

I think Turing’s paper shows clearly that he failed to make this distinction. And the distinction matters, because there may be entities that would fail T1 (showing their nonhumanity all too obviously) but still turn out to have a mind. What if your new friend, who seemed ordinary and likeable, suddenly glows purple all over, says “Five out of six of me had a really crummy morning,” and then removes the top of her own skull to massage her six-lobed brain? At that point, she fails T1—not because she’s a machine, but because she’s Zxborp Vood, the ambassador from Sirius Gamma.

What this shows is that “imitation game” is a misleading label for what really interests us. So in what follows I’m going to assume we’re talking about T2: not whether the humanoid entity is convincingly human, but whether he/she/it is convincingly some kind of genuinely conscious intelligence.

Now, here’s the kicker. To say that an entity fails T2 is to say that we know it’s a mere machine—a simulation of a conscious being rather than the real thing. But then, by a simple point of logic that often gets missed, passing T2 means only that we still don’t know, one way or the other.

That last bit is vital, and people routinely get it wrong, so read it again. OK, don’t, but allow me to repeat it in a different way. Failing T2 establishes the absence of consciousness. (“Trickery detected: it’s merely a device designed to fool us, and we’re not fooled!”) But it doesn’t follow that passing T2 establishes consciousness, or even gives us evidence for its probable presence (“trickery ruled out”). Passing T2 only establishes that the question remains open. In the formal language of logic: “A entails B, and A” entails B. But “A entails B, and not-A” entails exactly squat about B.
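If you’d like that spelled out in standard notation, here’s a minimal formalization (the glosses are mine, with A for “the entity fails T2” and B for “the entity is a mere, mindless machine”):

\[(A \to B) \land A \;\vdash\; B\]
\[(A \to B) \land \lnot A \;\nvdash\; B \qquad\text{and}\qquad (A \to B) \land \lnot A \;\nvdash\; \lnot B\]

The second line is the textbook fallacy of denying the antecedent: once the antecedent fails, the conditional licenses no conclusion about B at all.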

With that in mind, suppose it’s the year 2101, and the latest DomestiBots are so convincingly “human” that your grandchildren really have started to think of their new 9000-series DomestiDave as kind and caring. Or happy. Or depressed. Or tormented by a persistent pain in his left shoulder blade.

(As an aside, I should say that I’m skeptical of the common assumption that even this will happen. Our computers can already be programmed to do things that everyone in Turing’s day would have counted as impossible for a mere machine—which is to say, our computers might well have passed their T1. Yet we, having built and spent time with such clever machines, and indeed carried them around in our pockets, aren’t even slightly tempted to think of them as conscious. Whence comes the assumption—present in Turing’s paper, and now virtually universal in fiction, AI, and popular culture generally—that our grandchildren will be more gullible than we are?)

But OK, just suppose our grandchildren really do find themselves ascribing emotions or intentions to their machines. Remember, remember, remember: that will be a psychological report about them, not about their machines.

The 2015 film Ex Machina makes explicit the point I’ve hinted at here: in the end, Turing’s “veil” (wall, disguise) is irrelevant in either form. Ava is a robot who’s perfectly capable of passing T2. But her smug inventor, Nathan, already knows that. He wants to find out instead whether his rather feeble-minded employee Caleb will fall for her flirty shtick even when he’s allowed to see from the start that she’s not a beautiful woman but “just” a machine. “The challenge,” Nathan says, “is to show you that she’s a robot—and then see if you still feel she has consciousness.”

In a way the filmmakers perhaps didn’t intend, this awkward line of dialogue exposes the problem at the heart of Turing’s idea and any version of the test. For it’s an interesting technological question whether a “Nathan” will ever be capable of building an “Ava.” And, if he does, it’ll be an important psychological question whether the world’s “Calebs” will feel that she truly has (and feel compelled to treat her as if she truly has) emotions and intentions. But the far deeper and more troubling question is an ethical one, and (ironically, given the film’s relentless nerdboy sexism) it’s a question about Ava, not Caleb. Never mind what the rather clueless Caleb is emotionally inclined to “feel” about her! Leaving that aside, what does it make sense for us, all things considered, to believe she is? On that distinction just about everything hangs—and that’s why Turing’s attitude in his paper, which could be summed up in the phrase “as good as real should be treated as real,” is a fascinating idea about computational intelligence but a wholly and disastrously wrong idea when the issue comes to be, say, whether that pain in the left shoulder blade actually hurts.

More on this in The Babel Trilogy, Book Three: Infinity’s Illusion. As my story will ultimately suggest, I believe that in time, we will come to think of Turing’s ideas about artificial “thinking machines” and mechanical intelligence as a long blind alley in our understanding of the mind.

 

Epigenetics, Hominin, et cetera

Genetics is the study of what changes when the genome changes. Epigenetics is the study of inherited changes in the way genes work (or are “expressed”) that don’t depend on changes in the genome. See my note on Jean-Baptiste Lamarck in The Fire Seekers; for the full fascinating story, check out Matt Ridley’s Nature via Nurture or Nessa Carey’s The Epigenetics Revolution.

If you’re confused by hominid and hominin, welcome to the club. The simple version is this: the great apes (including us) are hominids, and anything in the genus Homo (living or extinct, and including us) is a hominin.

 

The FOXP2 “language gene”

The FOX (“forkhead box”) family of proteins gives its name to the genes that code for it, and FOXP2 is a real protein manufactured by what has been described, misleadingly, as “the language gene.”

People are fond of the idea that there’s a gene for blue eyes, for anemia, for Tay-Sachs disease, et cetera, as if we’re made from a neat stack of children’s blocks. In some cases it’s like that. But a condition like having bad impulse control, or good eyesight, involves many different genes. And what really makes it complicated is that (see the note on epigenetics) we all carry genes that may or may not get switched on. Even environmental factors, like nutrition and radiation, can switch a gene on or off. And that’s what FOXP2 does: shaped like a box with a pair of antlers, it’s a transcription factor—a protein that binds to DNA and affects whether other genes get switched on at all.
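For readers who think in code, here’s a toy sketch of that idea in Python (an illustration only, with made-up gene names; real gene regulation is vastly messier): a transcription factor isn’t a blueprint for a trait, it’s a switch that decides whether other blueprints get read at all.

# Toy model of gene regulation, illustration only: genes are plain
# on/off switches, and a transcription factor is a gene whose product
# flips the switches of other genes. All gene names are hypothetical.

REGULATES = {
    "FOXP2": {"gene_a", "gene_b", "gene_c"},  # targets of the factor
}

def expressed(genome, environment):
    """Return the set of genes actually switched on."""
    active = {gene for gene, on in genome.items() if on}
    # Environmental factors (nutrition, radiation...) can flip switches.
    for gene, on in environment.items():
        if on:
            active.add(gene)
        else:
            active.discard(gene)
    # A silenced transcription factor silences everything it regulates.
    for factor, targets in REGULATES.items():
        if factor not in active:
            active -= targets
    return active

genome = {"FOXP2": True, "gene_a": True, "gene_b": True, "gene_c": False}
print(expressed(genome, {}))                # FOXP2 active: its targets can run
print(expressed(genome, {"FOXP2": False}))  # FOXP2 switched off: targets go dark

The point of the toy: without touching the “blueprint” genes themselves, you can change which of them do any work, just by flipping the factor that regulates them.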

The much-studied “KE” family in England is real. About half of them have difficulty understanding sentences where word order is crucial, and show the same tendency to leave off certain initial sounds—for example, saying “able” for “table.” A paper published in 2001 identified a mutation in FOXP2 as the culprit.

 

Babblers share a different mutation on the FOXP2 gene

As far as I know, there’s no evidence for a genetic mutation to explain giftedness in languages, and most stories about such giftedness are exaggerated. On the one hand, there are cultures where most people can get by in several languages, and being able to get by in four or five is quite common; on the other hand, there are few people anywhere who maintain full mastery of more than about five languages at any one time.

There’s a fascinating tour through the world of hyperpolyglots (actual ones, not Babblers) in Michael Erard’s Babel No More.

 

Scanner . . . “this is low resolution, compared with what we can do”

For a sense of how far away this still is, you might take a look at the short video Neuroscience: Crammed with Connections, at https://youtu.be/8YM7-Od9Wr8. My own suspicion is that we’re way, way, way farther from “complete brain emulation” than even this suggests. (See the note on the Bekenstein bound.)

 

Language: “a crazy thing that shouldn’t exist”

Anyone who knows the literature about “Wallace’s Problem,” as it’s sometimes called, will detect here the influence of linguist Derek Bickerton. See in particular Adam’s Tongue, in which he argues that, despite misleading similarities, phenomena such as animal warning cries, songs, and gestures have essentially nothing to do with the abstract features underlying human language.

Paradoxically, humans are good at underrating the intellectual, social, and emotional sophistication of other animals—especially when doing so makes it easier to eat them or mistreat them—while being real suckers for the romantic idea that we might one day learn to “talk” to them. Chances are we never will talk to them, because they’re just too cognitively distant from us.

One aspect of that distance is particularly telling. Much has been made of the fact that elephants and some other species pass the “mirror recognition test.” But nearly all animals, even the most intelligent, fail another superficially easy test. Think how routine it is for humans, even young children, to follow another’s pointing hand, and thus demonstrate their ability to make the inference “Ah, she’s paying attention to something that she wants me to pay attention to.” Human infants start to “get” this when they are as little as nine to fourteen months old. Michael Tomasello, of the Max Planck Institute for Evolutionary Anthropology in Leipzig, has pointed out how striking it is that our closest genetic cousins, the chimpanzees, absolutely never get it. They have many cognitive abilities we once thought they lacked, yet even adult chimps definitively lack this mark of “shared intentionality.” That may explain a further critical difference: apart from some chimps occasionally cooperating to hunt monkeys, nonhuman primates lack the human ability to form groups dedicated to cooperating in pursuit of a common goal.

In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes makes a broader point that may be rooted in this cognitive difference. “The emotional lives of men and of other animals are indeed marvelously similar. But . . . the intellectual life of man, his culture and history and religion and science, is different from anything else we know of in the universe. That is fact. It is as if all life evolved to a certain point, and then in ourselves turned at a right angle and simply exploded in a different direction.”

The big question is why. For at least a partial answer, check out the TED talk “Why Humans Run the World” (and the book Sapiens) by Yuval Noah Harari. For something more technical, specifically on Wallace’s Problem (“How could language ever have evolved?”), see Derek Bickerton’s More Than Nature Needs.

Oh, but wait: here’s a key point on the other side of the cognitive debate. There is at least one species with highly sophisticated “shared intentionality” that routinely does “get” the pointing gesture, perhaps because of its inherently social nature or perhaps because it has spent thousands of years (possibly tens of thousands of years) coevolving with us: Canis lupus familiaris, otherwise known as the dog.

For one fascinating possible consequence of that coevolution, see the note “Neanderthals . . . went extinct not much later.”

 

“The Neanderthals had bigger brains than we do”

It’s true, just. The later Neanderthals—and their Homo sapiens contemporaries, around fifty thousand years ago—were equipped with about 1,500 cc of neuronal oatmeal, on average, whereas we get by on about 1,350 to 1,400 cc. Again, this is on average: the “normal” ranges for the two species are surprisingly large, and overlap—and arguably the differences vanish completely when you take into account body size and other factors.

 

“We have complete genomes for . . .”

Not yet. We have essentially complete genomes for some Paleolithic Homo sapiens, at least one late Neanderthal (a female who died in Croatia approximately forty thousand years ago), and one Denisovan—even though all we have of the entire Denisovan species is a finger bone and a few teeth. (All people of European descent have some Neanderthal DNA; some people of Melanesian, Polynesian, and Australian Aboriginal descent have some Denisovan DNA.) Intriguingly, the Denisovan genome suggests they interbred with H. sapiens, H. neanderthalensis, and yet another, unidentified human species.

We have nothing yet for the Red Deer Cave people and don’t even know for sure that they’re a separate species. Some experts have suggested that they were the result of interbreeding between Denisovans and H. sapiens, but a recently rediscovered thigh bone, dated to fourteen thousand years old, suggests that the Red Deer Cave people are, like H. floresiensis perhaps, a long-surviving remnant of a more primitive population, probably H. erectus.

The bit about FOXQ3 is pure invention—a claim that some knowledgeable readers may find hard to believe. I had Natazscha “discover” it in a draft of this chapter that I wrote while reading some of the research on FOXP2 in early 2015. But there really is a gene called FOXO3 (associated with human longevity, no less, and of great interest to the real-world people I make some fun of here as the “Extenders”). I found out about the real FOXO3, quite by chance, more than a year after I’d invented FOXQ3.
