Our Final Invention: Artificial Intelligence and the End of the Human Era

If technologists and defense experts operating in the White House and the NSA cannot control a narrowly intelligent piece of malware, what chance do their counterparts have against future AGI or ASI?

No chance at all.

*   *   *

Cyber experts play war games that feature cyberattacks, creating disaster scenarios meant to teach and to provoke solutions. They’ve had names like “Cyberwar” and “Cyber Shockwave.” Never, however, have war-gamers suggested that our wounds would be self-inflicted, although they will be, in two ways. First, as we’ve discussed, the United States cocreated the Stuxnet family, which could become the AK-47s of a never-ending cyberwar: cheap, reliable, and mass-produced. Second, I believe that damage from AI-grade cyberweapons will come not only from abroad but also from home.

Compare the dollar costs of terrorist attacks and financial scandals. Al Qaeda’s attacks of 9/11 cost the United States some $3.3 trillion, if you count the wars in Afghanistan and Iraq. If you don’t count those wars, the direct costs of physical damage, economic impact, and beefed-up security come to nearly $767 billion. The subprime mortgage scandal that caused the worst global downturn since the Great Depression cost about $10 trillion globally, but around $4 trillion at home. The Enron scandal comes in at about $71 billion, while the Bernie Madoff fraud cost almost as much, at $64.8 billion.

These numbers show that in dollar cost per incident, financial fraud competes with the most expensive terrorist act in history, and the subprime mortgage crisis dwarfs it. When researchers put advanced AI into the hands of businessmen, as they imminently will, these people will suddenly possess the most powerful technology ever conceived. Some will use it to perpetrate fraud. I think the next cyberattack will be “friendly fire”; that is, it will originate at home, damage infrastructure, and kill Americans.

Sound far-fetched?

Enron, the scandal-plagued Texas corporation helmed by Kenneth Lay (since deceased), Jeffrey Skilling, and Andrew Fastow (both currently in prison), was in the energy trading business. In 2000 and 2001, Enron traders drove up energy prices in California by using strategies with names like “Fat Boy” and “Death Star.” In one ploy, traders increased prices by secretly ordering power-producing companies to shut down plants. Another plan endangered lives.

Enron held rights to a vital electricity transmission line connecting Northern and Southern California. In 2000, by overloading the line with subscribers during a heat wave, Enron’s traders created “phantom,” or fake, congestion and a bottleneck in energy delivery. Prices skyrocketed, and electricity became critically scarce. California officials supplied energy to some regions while darkening others, a practice called “rolling blackouts.” The blackouts caused no known deaths but plenty of fear, as families became trapped in elevators and streets were lit only by headlights. Apple, Cisco, and other corporations were forced to shut down, at a loss of millions of dollars.

But Enron made millions. During the blackouts one trader was recorded saying, “Just cut ’em off. They’re so f——d. They should just bring back f——g horses and carriages, f——g lamps, f——g kerosene lamps.”

That trader is now an energy broker in Atlanta. But the point is this: if Enron’s executives had had access to smart malware that would have let them turn off California’s energy, do you think they would have hesitated to use it, even if it meant damage to grid hardware and loss of life? I think not.

 

Chapter Sixteen

AGI 2.0

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.

—Ray Kurzweil, inventor, author, futurist

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

—George Dyson, historian

The more time I spend with AI makers and their work, the sooner I think AGI will get here. And I’m convinced that when it does, its makers will discover it’s not what they had meant to create when they set out on their quest years before. That’s because, while its intelligence may be human level, it won’t be humanlike, for all the reasons I’ve described. There’ll be a lot of clamor about introducing a new species to the planet. It will be thrilling. But gone will be talk of AGI being the next evolutionary step for Homo sapiens, and all that it implies. In important ways we simply won’t grasp what it is.

In its domain, the new species will be as fleet and strong as Watson is in its. If it coexists with us at all as our tool, it will nevertheless extend its tendrils into every nook of our lives the way Google and Facebook would like to. Social media might turn out to be its incubator, its distribution system, or both. If it is a tool first, it will have answers while we’re still formulating questions, and then, answers for itself alone. Throughout, it won’t have feelings. It won’t have our mammalian origins, our long brain-building childhood, or our instinctive nurturing, even if it is raised as a simulacrum of a human from infancy to adulthood. It probably won’t care about you any more than your toaster does.

That’ll be AGI version 1.0. If by some fluke we avoid an intelligence explosion and survive long enough to influence the creation of AGI 2.0, perhaps it could be imbued with feelings. By then scientists might have figured out how to computationally model feelings (perhaps with 1.0’s help), but feelings will be secondary objectives, after primary moneymaking goals. Scientists might explore how to train those synthetic feelings to be sympathetic to our existence. But 1.0 is probably the last version we’ll see, because we won’t live to create 2.0. Like natural selection, we choose solutions that work first, not best.

Stuxnet is an example of that. So are autonomous killing drones. With DARPA funds, scientists at the Georgia Tech Research Institute have developed software that allows unmanned vehicles to identify enemies through visual recognition and other means, then launch a lethal strike against them. All without a human in the loop. One piece I read about it includes this well-intentioned sop: “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.”

I’m reminded of the old saw, “When was a weapon ever invented that wasn’t used?” A quick Google search revealed a scary list of weaponized robots all set up for autonomous killing and wounding (one made by iRobot wields a Taser), just waiting for the go-ahead. I imagine these machines will be in use long before you and I know they are. Policy makers spending public dollars will not feel they require our informed consent any more than they did before recklessly deploying Stuxnet.

As I worked on this book, I asked scientists to communicate in layman’s terms. The most accomplished already did, and I believe it should be a requirement for general conversations about AI risks. At a high or overview level, this dialogue isn’t the exclusive domain of technocrats and rhetoricians, though to read about it on the Web you’d think it was. It doesn’t require a special “insider” vocabulary. It does require the belief that the dangers and pitfalls of AI are everyone’s business.

I also encountered a minority of people, even some scientists, who were so convinced that dangerous AI is implausible that they didn’t even want to discuss the idea. But those who dismiss this conversation—whether due to apathy, laziness, or informed belief—are not alone. The failure to explore and monitor the threat is almost society-wide. But that failure does not in the least impact the steady, ineluctable growth of machine intelligence. Nor does it alter the fact that we will have just one chance to establish a positive coexistence with beings whose intelligence is greater than our own.

 

