Our Final Invention: Artificial Intelligence and the End of the Human Era

“Imagine a very near future when you don’t forget anything because the computer remembers,” said former Google CEO Eric Schmidt. “You are never lost. You are never lonely.” With the introduction of a virtual assistant as capable as Siri on the iPhone, the first step of that scenario is in place. In the field of search, Siri has one giant advantage over Google—it provides one answer. Google provides tens of thousands, even millions of “hits,” which may or may not be relevant to your search. In a limited number of domains—general search, finding directions, finding businesses, scheduling, e-mailing, texting, and updating social network profiles—Siri tries to determine the context and meaning of your query, and give you the one best answer. Not to mention that Siri listens to you, adding voice recognition to advanced mobile search. She speaks her answers. And, purportedly, she learns. According to patents recently filed by Apple, soon Siri will interact with online retailers to purchase items such as books and clothing, and even take part in online forums and customer support calls.

Don’t look now, but we’ve just passed a huge milestone in our own evolution. We’re conversing with machines. This is a change much bigger than GUI, the Graphical User Interface created by DARPA and brought to consumers by Apple (with thanks to Xerox’s Palo Alto Research Center, PARC). The promise of GUI and its desktop metaphor was that computers would work as humans do, with desktops and files, and a mouse that was a proxy for the hand. DOS’ idea was the opposite—to work with computers you had to learn their language, one of inflexible commands typed by hand. Now we are somewhere else entirely. Tomorrow’s technologies will succeed or fail on their ability to learn what we do, and help us do it.

As with GUI, the also-ran operating systems will follow Apple’s liberating innovation, Siri, or perish. And, of course, natural language will migrate to desktops and tablets, and before long, to every digital device, including ovens, dishwashers, home heating, cooling, entertainment systems, and cars. Or perhaps all of them will be controlled by that phone in your pocket, which has evolved into a whole other thing. It’s not a virtual assistant, but an assistant, period, with capabilities that will multiply with accelerating speed. And almost incidentally it has initiated actual dialogue between humans and machines that will last as long as our species does.

But let’s return to the present for a moment and listen to Andrew Rubin, Google’s Senior Vice President of Mobile. If he has his way, Google’s Android operating system won’t join in any virtual assistant games. “I don’t believe that your phone should be an assistant,” Rubin said, in as clear a statement of missing the boat as you’re ever likely to read. “Your phone is a tool for communicating. You shouldn’t be communicating with the phone; you should be communicating with somebody on the other side of the phone.” Someone should gently inform Rubin about the Voice Actions feature that his team has already smuggled into the Android system. They know the future is all about communicating with your phone.

*   *   *

Now, even though you plus Google equals a kind of greater-than-human intelligence, it’s not the kind that arises from an intelligence explosion, nor does it lead to one. Recall that an intelligence explosion requires a system that is both self-aware and self-improving, and has the necessary computer superpowers—it runs 24/7 with total focus, it swarms problems with multiple copies of itself, it thinks strategically at a blinding rate, and more. Arguably you and Google together constitute a special category of superintelligence, but your growth is limited by you and Google. You can’t supply Google with queries anything close to 24/7, and Google, while saving you time on research, wastes your time by forcing you to pick through too many answers searching for the best. And even working together, the odds are you’re not much of a programmer, and Google can’t program at all. So even if you could see the holes in your combined systems, your attempts to improve them would probably not be good enough to make an incremental advance, then do it again. No intelligence explosion for you.

Could intelligence augmentation (IA) ever deliver an intelligence explosion? Certainly, on about the same time line as AGI. Just imagine a human, an elite programmer, whose intelligence is so powerfully augmented that her already formidable programming skills are made better—faster, more knowledgeable, and attuned to improvements that would increase her overall intellectual firepower. This hypothetical post-human could program her next augmentation.

*   *   *

Back to software complexity. By all indications, computer researchers the world over are working hard to assemble the combustible ingredients of an intelligence explosion. Is software complexity a terminal barrier to their success?

One can get a sense of how difficult AGI’s software complexity problem is by polling the experts about how soon we can expect AGI’s arrival. At one end of the scale is Peter Norvig, Google’s Director of Research, who, as we discussed, doesn’t care to speculate beyond saying AGI is too distant to speculate about. Meanwhile, his colleagues, led by Ray Kurzweil, are proceeding with its development.

At the other end, Ben Goertzel, who, as Good did, thinks achieving AGI is merely a question of cash, says that before 2020 isn’t too soon to anticipate it. Ray Kurzweil, who’s probably the best technology prognosticator ever, predicts AGI by 2029, but doesn’t look for ASI until 2045. He acknowledges hazards but devotes his energy to advocating for the likelihood of a long snag-free journey down the digital birth canal.

My informal survey of about two hundred computer scientists at a recent AGI conference confirmed what I’d expected. The annual AGI Conferences, organized by Goertzel, are three-day meet-ups for people actively working on artificial general intelligence, or who, like me, are just deeply interested. They present papers, demo software, and compete for bragging rights. I attended one generously hosted by Google at their headquarters in Mountain View, California, often called the Googleplex. I asked the attendees when artificial general intelligence would be achieved, and gave them just four choices—by 2030, by 2050, by 2100, or not at all? The breakdown was this: 42 percent anticipated AGI would be achieved by 2030; 25 percent by 2050; 20 percent by 2100; 10 percent after 2100, and 2 percent never. This survey of a self-selected group confirmed the optimistic tone and date ranges of more formal surveys, one of which I cited in chapter 2. In a written response section I got grief for not including an option for dates before 2030. My guess is that perhaps 2 percent of the respondents would’ve estimated achieving AGI by 2020, and another 2 percent even sooner. I used to be stunned by this optimism, but no more. I’ve taken Kurzweil’s advice, and think of information technology’s progress exponentially, not linearly.

But now, when you next find yourself in a room full of people deeply invested in AGI research, for a lively time assert, “AGI will never be achieved! It’s just too hard.” Goertzel, for example, responded to this by looking at me as if I’d started preaching intelligent design. A sometime mathematics professor, like Vinge, Goertzel draws lessons for AI’s future from the history of calculus.

“If you look at how mathematicians did calculus before Isaac Newton and Gottfried Leibniz, they would take a hundred pages to calculate the derivative of a cubic polynomial. They did it with triangles, similar triangles and weird diagrams and so on. It was oppressive. But now that we have a more refined theory of calculus, any idiot in high school can take the derivative of a cubic polynomial. It’s easy.”

As calculus did centuries ago, AI research will incrementally proceed until ongoing practice leads to the discovery of new theoretical rules, ones that allow AI researchers to condense and abstract a lot of their work, at which point progress toward AGI will become easier and faster.

“Newton and Leibniz developed tools like the sum rule, the product rule, the chain rule, all these basic rules you learn in Calculus 1,” he went on. “Before you had those rules you were doing every calculus problem from scratch, and it was tremendously harder. So with the mathematics of AI we’re at the level of doing calculus before Newton and Leibniz—so that proving even really simple things about AI takes an insane amount of ingenious calculations. But eventually we’ll have a nice theory of intelligence, just like we now have a nice theory of calculus.”
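To make the contrast concrete, here is a worked example of my own, not drawn from Goertzel or the text. Computing the derivative of even one term of a cubic from the limit definition takes several steps; with the sum and power rules, the whole polynomial falls out in one line:

% One term, from the limit definition (the "from scratch" route):
f'(x) = \lim_{h \to 0} \frac{(x+h)^3 - x^3}{h}
      = \lim_{h \to 0} \left( 3x^2 + 3xh + h^2 \right)
      = 3x^2

% With the sum and power rules, the whole cubic at once:
\frac{d}{dx}\left( ax^3 + bx^2 + cx + d \right) = 3ax^2 + 2bx + c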

But not having a nice theory isn’t a deal breaker.

Goertzel says, “It may be that we need a scientific breakthrough in the rigorous theory of general intelligence before we can engineer an advanced AGI system. But I presently suspect that we don’t. My current opinion is that it should be possible to create a powerful AGI system via proceeding step-by-step from the current state of knowledge—doing engineering without a fully rigorous understanding of general intelligence.” As we’ve discussed, Goertzel’s OpenCog project organizes software and hardware into a “cognitive architecture” that simulates what the mind does. And this architecture may become a powerful and perhaps unpredictable thing. Somewhere along its development path before a comprehensive theory of general intelligence is born, Goertzel claims, OpenCog may reach AGI.

Sound crazy? The magazine New Scientist proposed that the University of Memphis’ LIDA, a system we discussed in chapter 11 that’s similar to OpenCog, shows signs of rudimentary consciousness. Broadly speaking, LIDA’s governing principle, called the Global Workspace Theory, holds that in humans perceptions fed by the senses percolate in the unconscious until they’re important enough to be broadcast throughout the brain. That’s consciousness, and it can be measured by simple awareness tasks, such as pushing a button when a light turns green. Though she used a “virtual” button, LIDA scores like a human when tested on these tasks.
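For readers who want the gist in code, below is a toy sketch of that broadcast idea. It is my own illustration, not LIDA’s architecture; the percept names, salience scores, and threshold are all invented. Percepts compete in an “unconscious” pool each cycle, and only a percept whose salience crosses the threshold is broadcast to every module at once, where a simple awareness routine presses a button when the green light wins.

# Toy sketch of a Global Workspace-style broadcast loop (illustration only;
# LIDA's real cognitive architecture is far richer than this).

import random

SALIENCE_THRESHOLD = 0.7  # invented cutoff for "important enough to broadcast"

def sense():
    """Return simulated percepts, each with a salience score between 0 and 1."""
    return {
        "green_light": random.random(),
        "background_hum": random.random() * 0.3,
        "itch": random.random() * 0.5,
    }

def broadcast(percept, salience, modules):
    """Make the winning percept available to every module at once."""
    for module in modules:
        module(percept, salience)

def press_button(percept, salience):
    """A simple awareness task: respond when the green light is broadcast."""
    if percept == "green_light":
        print(f"button pressed (salience {salience:.2f})")

def workspace_cycles(modules, cycles=20):
    for _ in range(cycles):
        percepts = sense()  # the "unconscious" pool of competing percepts
        winner, salience = max(percepts.items(), key=lambda kv: kv[1])
        if salience >= SALIENCE_THRESHOLD:  # below threshold: stays unconscious
            broadcast(winner, salience, modules)

if __name__ == "__main__":
    workspace_cycles([press_button])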

With technologies like these, Goertzel’s wait-and-see approach seems risky to me. It hints at the creation of what I’ve already described—strong machine intelligence that is similar to a human’s, but not human equivalent, and a lot less knowable. It suggests surprise, as if an AGI could one day just show up, leaving us insufficiently prepared for “normal” accidents, and certainly lacking safeguards like formal, Friendly AI. It’s kind of like saying, “If we walk long enough in the woods we’ll find the hungry bears.” Eliezer Yudkowsky has similar fears. And like Goertzel, he doesn’t think software complexity will stand in the way.

“AGI is a problem the brain is a cure for,” he told me. “The human brain can do it—it can’t be that complicated. Natural selection is stupid. If natural selection can solve the AGI problem, it cannot be that hard in an absolute sense. Evolution coughed up AGI easily by randomly changing things around and keeping what worked. It followed an incremental path with no foresight.”

Yudkowsky’s optimism about achieving AGI starts with the idea that human-level intelligence has been achieved by nature once, in humans. Humans and chimpanzees had a common ancestor some five million years ago. Today human brains are four times the size of chimp brains. So, taking about five million years, “stupid” natural selection led to the incremental scaling up of brain size, and a creature much more intelligent than any other.

With focus and foresight, “smart” humans should be able to create intelligence at a human level much faster than natural selection.

But again, as Yudkowsky cites, there’s a giant, galaxywide problem if someone achieves AGI before he or other researchers figure out Friendly AI or some way to reliably control AGI. If AGI comes about from incremental engineering in a fortuitous intersection of effort and accident, as Goertzel proposes, isn’t an intelligence explosion likely? If AGI is self-aware and self-improving, as we’ve defined it, won’t it endeavor to fulfill basic drives that may be incompatible with our survival, as we discussed in chapters 5 and 6? In other words, isn’t AGI unbound likely to kill us all?

“AGI is the ticking clock,” said Yudkowsky, “the deadline by which we’ve got to build Friendly AI, which is harder. We need Friendly AI. With the possible exception of nanotechnology being released upon the world, there is just nothing in that whole catalogue of disasters that is comparable to AGI.”

Of course tensions arise between AI theorists such as Yudkowsky and AI makers such as Goertzel. While Yudkowsky argues that creating AGI is a catastrophic mistake unless it’s provably friendly, Goertzel wants to develop AGI as quickly as possible, before fully automated infrastructure makes it easier for an ASI to seize control. Goertzel has received e-mails, though not from Yudkowsky or his colleagues, warning that if he proceeds with developing AGI that isn’t provably safe, he’s “committing the Holocaust.”

But here’s the paradox. If Goertzel gave up pursuing AGI and devoted his life to advocating that everyone else stop too, it would matter not a whit. Other companies, governments, and universities would plow ahead with their research. For this very reason, Vinge, Kurzweil, Omohundro, and others believe relinquishment, or giving up the pursuit of AGI, is not a viable option. In fact, with so many reckless and dangerous nations on the planet—North Korea and Iran for example—and organized crime in Russia and state-sponsored criminals in China launching wave upon wave of next-gen viruses and cyberattacks, relinquishment would simply cede the future to crackpots and gangsters.

A defensive strategy more likely to win our survival is one that Omohundro has already begun: a complete science for understanding and controlling self-aware, self-improving systems, that is, AGI and ASI. And because of the challenges of developing an antidote like Friendly AI before AGI has been created, development of that science must happen roughly in tandem with AGI research itself. Then, when AGI comes into being, its control system already exists. Unfortunately for all of us, AGI researchers have a huge lead, and as Vernor Vinge says, a global economic wind fills their sails.
