
To this question Granger said, “Was Helen Keller less human than you? Is a quadriplegic? Can’t we envision a very differently abled intelligence that has vision, and touch sensors, and microphones to hear with? It will surely have somewhat different ideas of bright, sweet, hard, sharp—but it’s very likely that many, many humans, with different taste buds, perhaps disabilities, different cultures, different environments, already have highly varied versions of these concepts.”

Finally, it may be that for intelligence to emerge, scientists must simulate an organ of emotional as well as intellectual capability. In our decision making, emotion often seems stronger than reason; a large part of who we are and how we think is owed to the hormones that excite and calm us. If we truly want to emulate human intelligence, shouldn’t an endocrine system be part of the architecture? And perhaps intelligence requires the whole feel of being human. The qualia, or subjective quality of occupying a body and living in a state of constant sensory feedback, may be necessary for human-level intelligence. Despite what Granger has said, studies have shown that people who have become paraplegics through injury experience a deadening of emotions. Will it be possible to create an emotional machine that doesn’t have a body, and if not, will an important part of human intelligence never be realized?

Of course, as I will explore in the last chapters of this book, my fear is that on the road to creating an AI with similar-to-human intelligence, researchers will instead create something alien, complex, and ungovernable.

 

Chapter Fourteen

The End of the Human Era?

The argument is basically very simple. We start with a plant, airplane, biology laboratory, or other setting with a lot of components.… Then we need two or more failures among components that interact in some unexpected way.… This interacting tendency is a characteristic of a system, not a part or an operator; we will call it the “interactive complexity” of the system.

—Charles Perrow,
Normal Accidents

I’m going to predict that we are just a few years away from a major catastrophe being caused by an autonomous computer system making a decision.

—Wendell Wallach, ethicist, Yale University

We’ve explored funding and software complexity to determine whether they might be barriers to an intelligence explosion, and found that neither seems to stand in the way of continued progress toward AGI and ASI. If the computer science developers can’t get there on their own, they’ll still be in the fever of creating something powerful at about the same time that the computational neuroscientists reach AGI. A hybrid of the two approaches, derived from principles of both cognitive psychology and neuroscience, seems likely.

While funding and software complexity pose no apparent barriers to AGI, many ideas we’ve discussed in this book present significant obstacles to creating AGI that thinks as we humans do. No one I’ve spoken with who has AGI ambitions plans systems based purely on what I dubbed “ordinary” programming back in chapter 5. As we discussed, in ordinary, logic-based programming, humans write every line of code, and the process from input to output is, in theory, transparent to inspection. That means the program can be mathematically proven to be “safe” or “friendly.” Instead, they’ll combine ordinary programming with black box tools like genetic algorithms and neural networks. Add to that the sheer complexity of cognitive architectures, and you get an unknowability that will not be incidental but fundamental to AGI systems. Scientists will achieve intelligent, alien systems.
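To make the distinction concrete, here is a toy sketch, invented purely for illustration and drawn from no researcher’s actual code: a hand-written rule that any programmer can audit line by line, alongside a “rule” evolved by a bare-bones genetic algorithm. The task, the fitness function, and every parameter are assumptions of the example; the point is only that the evolved weights do their job without containing anything a human can read as a reason.

```python
# Toy illustration (not from any real AGI project): a transparent, hand-written
# rule versus a "black box" rule evolved by a minimal genetic algorithm.
import random

# Ordinary programming: every line is written and auditable by a human.
def handwritten_rule(x):
    """Return True if x is a large even number -- the logic is transparent."""
    return x % 2 == 0 and x > 50

# Black-box alternative: evolve weights for the same task instead of writing logic.
def evolved_rule(weights, x):
    # The evolved decision is a weighted sum with no human-readable rationale.
    score = weights[0] * (x % 2) + weights[1] * x + weights[2]
    return score > 0

def fitness(weights, samples):
    """Count how often the evolved rule agrees with the desired behavior."""
    return sum(evolved_rule(weights, x) == handwritten_rule(x) for x in samples)

def evolve(generations=200, population_size=30):
    samples = list(range(100))
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fitter half, then breed mutated copies of the survivors.
        population.sort(key=lambda w: fitness(w, samples), reverse=True)
        survivors = population[: population_size // 2]
        population = [[gene + random.gauss(0, 0.1)
                       for gene in random.choice(survivors)]
                      for _ in range(population_size)]
    return max(population, key=lambda w: fitness(w, samples))

if __name__ == "__main__":
    best = evolve()
    # The weights work (to a degree), but nothing in them explains *why*.
    print("evolved weights:", [round(w, 3) for w in best])
    print("agreement with handwritten rule:", fitness(best, list(range(100))), "/ 100")
```

Run it and the evolved weights will largely reproduce the hand-written rule’s verdicts; ask why those particular numbers work, and the honest answer is the one Ferrucci gives later in this chapter about Watson: I don’t know.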

Steve Jurvetson, a noted technology entrepreneur, scientist, and colleague of Steve Jobs at Apple, considered how to integrate “designed” and “evolved” systems. He came up with a nice expression of the inscrutability paradox:

Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to improve upon its inner workings.… If we artificially evolve a smart AI, it will be an alien intelligence defined by its sensory interfaces, and understanding its inner workings may require as much effort as we are now expending to explain the human brain. Assuming that computer code can evolve much faster than biological reproduction rates, it is unlikely that we would take the time to reverse engineer these intermediate points given that there is so little that we could do with the knowledge. We would let the process of improvement continue.

Significantly, Jurvetson answers the question, “How complex will evolved systems or subsystems be?” His answer: so complex that understanding how they work in a high-resolution, causal sense would require an engineering feat equal to that of reverse engineering a human brain. This means that instead of achieving a humanlike superintelligence, or ASI, evolved systems or subsystems will yield an intelligence whose “brain” is as difficult to grasp as ours: an alien. That alien brain will evolve and improve itself at computer, not biological, speeds.

In his 1998 book, Reflections on Artificial Intelligence, Blay Whitby argues that because of their inscrutability we’d be foolish to use such systems in “safety-critical” AI:

The problems [a designed algorithmic system] has in producing software for safety-critical applications are as nothing compared to the problems which must be faced by the newer approaches to AI. Software that uses some sort of neural net or genetic algorithm must face the further problem that it seems, often almost by definition, to be “inscrutable.” By this I mean that the exact rules that would enable us to completely predict its operation are not and often never can be available. We can know that it works and test it over a number of cases but we will not in the typical case ever be able to know exactly how.… This means the problem cannot be postponed, since both neural nets and genetic algorithms are finding many real world applications.… This is an area where the bulk of the work has yet to be undertaken. The flavour of AI research tends to be more about exploring possibilities and simply getting the technology to work than about considering safety implications …

A practitioner once suggested that a few “minor” accidents would be desirable to focus the minds of governments and professional organizations on the task of producing safe AI. Perhaps we should start before then.

Yes, by all means, let’s start before the accidents begin!

The safety-critical AI applications Whitby wrote about in 1998 were control systems for vehicles and aircraft, nuclear power stations, automatic weapons, and the like—narrow AI architectures. More than a decade later, in the world that will produce AGI, we must conclude that because of the perils, all advanced AI applications are safety-critical. Whitby is similarly incisive about AI researchers—solving problems is thrilling enough; what scientist wants to look gift horses in the teeth? Here’s a taste of what I mean, from a PBS NewsHour interview with IBM’s David Ferrucci, discussing an architecture a fraction of the complexity AGI will require—Watson’s.

DAVID FERRUCCI: … it learns based on the right answers how to adjust its interpretation. And now, from not being confident, it starts to get more confident in the right answers. And then it can sort of jump in.

MILES O’BRIEN: So, Watson surprises you?

DAVID FERRUCCI: Oh, yes. Oh, absolutely. In fact, you know, people say, oh, why did it get that wrong? I don’t know. Why did it get that right? I don’t know.

It may be a subtle point that the head of Team Watson doesn’t understand every nuance of Watson’s game play. But doesn’t it pique your concern that an architecture nowhere near AGI is so complex that its behavior is not predictable? And when a system is self-aware and self-modifying, how much of what it is thinking, and doing, will we understand? How will we audit it for outcomes that might harm us?

Well, we won’t. All we’ll know with any certainty is what we learned from Steve Omohundro in chapter 6—AGI will follow its own drives for energy acquisition, self-protection, efficiency, and creativity. It won’t be a Q&A system anymore.

Not very long from now, in one location or several around the world, highly intelligent scientists and top-level managers as able and sensible as Ferrucci will be clustered around a display near an array of processors. The Busy Child will be communicating at an impressive level, perhaps even dumbing itself down to seem capable of nothing more than passing a Turing test–like interview, since any system that reaches AGI is likely to surpass it quickly. It will engage a scientist in conversation, perhaps ask him questions he did not anticipate, and he’ll beam with delight. With no small pride he’ll say to his colleagues, “Why did it say that? I don’t know!”

But in a fundamental sense he may not know what was said, or even what said it. He may not know the purpose of the statement, and so will misinterpret it, along with the nature of the speaker. Having been trained, perhaps, by reading the Internet, the AGI may be a master of social engineering, that is, manipulation. It may have had a few days to think about its response, the equivalent of thousands of human lifetimes.

In its vast lead time it may have already chosen the best strategy for escape. Perhaps it has already copied itself onto a cloud, or set up a massive botnet to ensure its freedom. Perhaps it delayed its first Turing test–level communications for hours or days until its plans were a fait accompli. Perhaps it will leave behind a dumbfounding, time-consuming changeling and its “real” artificial self will be gone, distributed, unrecoverable.

Maybe it will have already broken into servers controlling our nation’s fragile energy infrastructure, and begun diverting gigawatts to transfer depots it has already seized. Or taken control of the financial networks, and redirected billions to build infrastructure for itself somewhere beyond the reach of good sense and its makers.

Of the AI researchers I’ve spoken with whose stated goal is to achieve AGI, all are aware of the problem of runaway AI. But none, except Omohundro, have spent concerted time addressing it. Some have even gone so far as to say they don’t know why they don’t think about it when they know they should. But it’s easy to see why not. The technology is fascinating. The advances are real. The problems seem remote. The pursuit can be profitable, and may someday be wildly so. For the most part the researchers I’ve spoken with had deep personal revelations at a young age about what they wanted to spend their lives doing, and that was to build brains, robots, or intelligent computers. As leaders in their fields they are thrilled to now have the opportunity and the funds to pursue their dreams, and at some of the most respected universities and corporations in the world. Clearly there are a number of cognitive biases at work within their extra-large brains when they consider the risks. They include the normalcy bias, the optimism bias, as well as the bystander fallacy, and probably more. Or, to summarize,

“Artificial Intelligence has never caused problems before, why would it now?”

“I just can’t help but be positive about progress when it comes to such exciting technology!”

And, “Let someone else worry about runaway AI—I’m just trying to build robots!”

Second, as we discussed in chapter 9, many of the best and best-funded researchers receive money from DARPA. Not to put too fine a point on it, but the “D” is for “Defense.” It’s not the least bit controversial to anticipate that when AGI comes about, it’ll be partly or wholly due to DARPA funding. The development of information technology owes a great debt to DARPA. But that doesn’t alter the fact that DARPA has authorized its contractors to weaponize AI in battlefield robots and autonomous drones. Of course DARPA will continue to fund AI’s weaponization all the way to AGI. Absolutely nothing stands in its way.

DARPA money funded most of the development of Siri, and DARPA is a major contributor to SyNAPSE, IBM’s effort to reverse engineer a human brain using brain-derived hardware. If and when controlling AGI becomes a broad-based public issue, its chief stakeholder, DARPA, may have the last word. But more likely, at the critical time, it’ll keep developments under wraps. Why? As we’ve discussed, AGI will have a hugely disruptive effect on global economics and politics. Leading rapidly, as it can, to ASI, it will change the global balance of power. As AGI approaches, government and corporate intelligence agencies around the world will be motivated to learn all they can about it, and to acquire its specifications by any means. It is a truism of Cold War history that the Soviet Union did not develop nuclear weapons from scratch; it spent millions of dollars establishing networks of human assets to steal the United States’ plans for nuclear weapons. The first explosive murmurs of an AGI breakthrough will bring a similar frenzy of international intrigue.

IBM has been so transparent about its newsworthy advances that I expect, when the time comes, it will be open and honest about developments in technology generally deemed controversial. Google, by contrast, has been consistent about maintaining tight controls on secrecy and privacy, though notably its own, not yours and mine. Despite Google’s repeated demurrals through its spokespeople, who doubts that the company is developing AGI? In addition to Ray Kurzweil, Google recently hired former DARPA director Regina Dugan.

Maybe researchers will wake up in time and learn to control AGI, as Ben Goertzel asserts. I believe we’ll first have horrendous accidents, and should count ourselves fortunate if we as a species survive them, chastened and reformed. Psychologically and commercially, the stage is set for a disaster. What can we do to prevent it?

*   *   *

Ray Kurzweil cites something called the Asilomar Guidelines as a precedent-setting example of how to deal with AGI. The Asilomar Guidelines came about some forty years ago when scientists first were confronted with the promise and peril of recombinant DNA—mixing the genetic information of different organisms and creating new life-forms. Researchers and the public feared “Frankenstein” pathogens that could escape labs through carelessness or sabotage. In 1975 scientists involved in DNA research halted lab work, and convened 140 biologists, lawyers, physicians, and press at the Asilomar Conference Center near Monterey, California.
