
Computing is undergoing the most remarkable transformation since the invention of the PC. The innovation of the next decade is going to outstrip the innovation of the past three combined.

—Paul Otellini, CEO of Intel

With his books The Age of Spiritual Machines: When Computers Exceed Human Intelligence and The Singularity Is Near, Ray Kurzweil commandeered the word “singularity” and changed its meaning to that of a bright, hopeful period of human history, which his tools of extrapolation let him see with remarkable precision. Sometime in the next forty years, he writes, technological development will advance so rapidly that human existence will be fundamentally altered, the fabric of history torn. Machines and biology will become indistinguishable. Virtual worlds will be more vivid and captivating than reality. Nanotechnology will enable manufacturing on demand, ending hunger and poverty, and delivering cures for all of mankind’s diseases. You’ll be able to stop your body’s aging, or even reverse it. It’s the most important time to be alive not just because you will witness a truly stupefying pace of technological transformation, but because the technology promises to give you the tools to live forever. It’s the dawn of a “singular” era.

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself …

Consider J. K. Rowling’s Harry Potter stories from this perspective. These tales may be imaginary, but they are not unreasonable visions of our world as it will exist only a few decades from now. Essentially all of the Potter “magic” will be realized through the technologies I will explore in this book. Playing quidditch and transforming people and objects into other forms will be feasible in full-immersion virtual-reality environments, as well as in real reality, using nanoscale devices.

So, the singularity will be “neither utopian nor dystopian” but we’ll get to play quidditch! Obviously, Kurzweil’s Singularity is dramatically different from Vernor Vinge’s singularity and I. J. Good’s intelligence explosion. Can they be reconciled? Is it simultaneously the best time to be alive, and the worst? I’ve read almost every word Kurzweil has published, and listened to every available audio recording, podcast, and video. In 1999 I interviewed him at length for a documentary film that was in part about AI. I know what he’s written and said about the dangers of AI, and it isn’t much.

Surprisingly, however, he was indirectly responsible for the subject’s most cogent cautionary essay—Bill Joy’s “Why the Future Doesn’t Need Us.” In it, Joy, a programmer, computer architect, and the cofounder of Sun Microsystems, urges a slowdown and even a halt to the development of three technologies he believes are too deadly to pursue at the current pace: artificial intelligence, nanotechnology, and biotechnology. Joy was prompted to write it after a frightening conversation in a bar with Kurzweil, followed by his reading The Age of Spiritual Machines. In nonscholarly literature and lectures about the perils of AI, I think only Asimov’s Three Laws are cited more frequently, albeit misguidedly, than Joy’s hugely influential essay. This paragraph sums up his position on AI:

But now, with the prospect of human-level computing power in about thirty years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable. Having struggled my entire career to build reliable software systems, it seems to me more than likely that this future will not work out as well as some people may have imagined. My personal experience suggests we tend to overestimate our design abilities. Given the incredible power of these new technologies, shouldn’t we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?

Kurzweil’s bar talk can start a national dialogue, but his few cautionary words get lost in the excitement of his predictions. He insists he’s not painting a utopian view of tomorrow, but I don’t think there’s any doubt that he is.

Few write more knowledgeably or persuasively about technology than Kurzweil—he takes pains to make himself clearly understood, and he defends his message with humility. However, I think he has made a mistake by appropriating the name “singularity” and giving it a new, rosy meaning. So rosy that I, like Vinge, find the definition scary, full of compelling images and ideas that mask its danger. His rebranding underplays AI’s peril and overinflates the promise. Starting from a technological proposition, Kurzweil has created a cultural movement with strong religious overtones. I think mixing technological change and religion is a big mistake.

Imagine a world where the difference between man and machine blurs, where the line between humanity and technology fades, and where the soul and the silicon chip unite … In [Kurzweil’s] inspired hands, life in the new millennium no longer seems daunting. Instead, Kurzweil’s twenty-first century promises to be an age in which the marriage of human sensitivity and artificial intelligence fundamentally alters and improves the way we live.

Kurzweil’s not just the godfather of Singularity issues, a polite, relentless debater, and a tireless if rather mechanical promoter. He’s the den-master for a lot of young men, and some women, living on the singularity edge. Singularitarians tend to be twenty- and thirty-somethings, male, and childless. For the most part, they’re smart white guys who’ve heard the call of the Singularity. Many have answered by dropping the kinds of careers that would’ve made their parents proud to take on monkish lives committed to Singularity issues. A lot are autodidacts, probably in part because no undergraduate program offers a major in computer science, ethics, bioengineering, neuroscience, psychology, and philosophy: in short, Singularity studies. (Kurzweil cofounded Singularity University, which offers no degrees and isn’t accredited. But it promises “a broad, cross-disciplinary understanding of the biggest ideas and issues in transformative technologies.”) Many Singularitarians are too smart and self-directed to get in line for traditional education anyway. And many are addled wing nuts few colleges or universities would invite on campus.

Some Singularitarians have adopted rationality as the major tenet of their creed. They believe that greater logical and reasoning abilities, particularly among tomorrow’s decision makers, decrease the probability that we’ll commit suicide by AI. Our brains, they argue, are full of bizarre biases and heuristics that served us well during our evolution, but get us into trouble when confronted with the modern world’s complex risks and choices. Their main focus isn’t on a catastrophic, negative Singularity, but a blissful, positive one. In it, we can take advantage of life-extending technologies that let us live on and on, probably in mechanical rather than biological form. In other words, cleanse yourself of faulty thinking, and you can find deliverance from the world of the flesh, and discover life everlasting.

It’s no surprise that the Singularity is often called the Rapture of the Geeks—as a movement it has the hallmarks of an apocalyptic religion, including rituals of purification, eschewing frail human bodies, anticipating eternal life, and a (somewhat) uncontested charismatic leader. I wholeheartedly agree with the Singularitarian idea that AI is the most important thing we could be thinking about right now. But when it comes to immortality talk, I get off the bus. Dreams about eternal life throw out a powerful distortion field. Too many Singularitarians believe that the confluence of technologies presently accelerating will not yield the kinds of disasters we might anticipate from any of them individually, nor the conjunctive disasters we might also foresee, but instead will do something 180 degrees different. It will save mankind from the thing it fears most. Death.

But how can you competently evaluate tools, and whether and how their development should be regulated, when you believe the same tools will permit you to live forever? Not even the world’s most rational people have a magical ability to dispassionately evaluate their own religions. And as scholar William Grassie argues, when you are asking questions about transfiguration, a chosen few, and living forever, what are you talking about if not religion?

Will the Singularity lead to the supersession of humanity by spiritual machines? Or will the Singularity lead to the transfiguration of humanity into superhumans who live forever in a hedonistic, rationalist paradise? Will the Singularity be preceded by a period of tribulation? Will there be an elect few who know the secrets of the Singularity, a vanguard, perhaps a remnant who make it to the Promised Land? These religious themes are all present in the rhetoric and rationalities of the Singularitarians, even if the pre- and post-millennialist interpretations aren’t consistently developed, as is certainly the case with pre-scientific Messianic movements.

Unlike Good’s and Vinge’s takes on the accelerating future, Kurzweil’s Singularity isn’t brought about by artificial intelligence alone, but by three technologies advancing to points of convergence—genetic engineering, nanotechnology, and robotics, a broad term he uses to describe AI. Also unlike Good and Vinge, Kurzweil has come up with a unified theory of technological evolution that, like any respectable scientific theory, tries to account for observable phenomena, and makes predictions about future phenomena. It’s called the Law of Accelerating Returns, or LOAR.

First, Kurzweil proposes that a smooth exponential curve governs evolutionary processes, and that the development of technology is one such evolutionary process. Like biological evolution, technology evolves a capability, then uses that capability to evolve to the next stage. In humans, for instance, big brains and opposable thumbs allowed toolmaking and the power grip needed to use our tools effectively. In technology, the printing press contributed to bookbinding, literacy, the rise of universities, and more inventions. The steam engine drove the Industrial Revolution, which yielded more and more inventions in turn.

Because of its way of building upon itself, technology starts slow, but then its growth curve steepens until it shoots upward almost vertically. According to Kurzweil’s trademark graphs and charts, we are entering the most critical period of technological evolution, the steep upward part, the “knee of the exponential curve.” It’s all up from here.
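To see how quickly a fixed doubling period turns a flat line into a near-vertical one, here is a minimal sketch in Python; the starting capability of 1 and the two-year doubling period are illustrative assumptions, not Kurzweil’s figures:

    # A toy model of smooth exponential growth: capability doubles every
    # `doubling_period` years. All values here are illustrative.
    def capability(years, start=1.0, doubling_period=2.0):
        """Capability after `years` of growth with a fixed doubling period."""
        return start * 2 ** (years / doubling_period)

    for year in range(0, 41, 8):
        print(f"year {year:2d}: {capability(year):>11,.0f}x starting capability")

The first rows barely move (1x, 16x, 256x), while the last ones shoot upward (65,536x, then 1,048,576x at year forty); that late, near-vertical stretch is the knee.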

Kurzweil developed his Law of Accelerating Returns to describe the evolution of any process in which patterns of information evolve. He applies LOAR to biology, which favors increasing molecular order, but it is more convincing when used to anticipate the pace of change in information technologies, including computers, digital cameras, the Internet, cloud computing, medical diagnostic and treatment equipment, and more—any technology involved in the storage and retrieval of information.

As Kurzweil notes, LOAR is fundamentally an economic theory. Accelerating returns are fueled by innovation, competition, market size—the features of the marketplace and manufacturers. In the computer market the effect is described by Moore’s law, another economic theory disguised as a technology theory, first observed in 1965 by Gordon Moore, who went on to cofound Intel.

Moore’s law states that the number of transistors that can be put on an integrated circuit to build a microprocessor doubles roughly every two years. A transistor is an on/off switch that can also amplify an electrical charge. More transistors means more processing speed, and faster computers. Moore’s law means computers will get smaller, more powerful, and cheaper at a reliable rate. This does not happen because Moore’s law is a natural law of the physical world, like gravity, or the Second Law of Thermodynamics. It happens because the consumer and business markets motivate computer chip makers to compete and contribute to smaller, faster, cheaper computers, smartphones, cameras, printers, solar arrays, and soon, 3-D printers. And chip makers are building on the technologies and techniques of the past. In 1971, 2,300 transistors could be printed on a chip. Forty years, or twenty doublings, later, the count had reached 2,600,000,000. And with those transistors, more than two million of which could fit on the period at the end of this sentence, came increased speed.
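The arithmetic is easy to check. A quick sketch in Python, assuming the forty-years, twenty-doublings reading (one doubling every two years):

    # Checking the transistor arithmetic: 2,300 transistors in 1971,
    # one doubling every two years, over forty years.
    transistors_1971 = 2_300
    doublings = 40 // 2  # twenty doublings
    print(f"{transistors_1971 * 2 ** doublings:,}")  # 2,411,724,800

Twenty doublings multiply the 1971 count by 2^20, or roughly a million, landing within shouting distance of the 2.6 billion transistors on 2011-era chips.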

Here’s a dramatic case in point. Jack Dongarra, a researcher at the University of Tennessee and Oak Ridge National Laboratory and part of a team that tracks supercomputer speed, determined that Apple’s best-selling tablet, the iPad 2, is as fast as a circa 1985 Cray 2 supercomputer. In fact, running at over 1.5 gigaflops (one gigaflop equals one billion mathematical operations, or calculations, per second), the iPad 2 would have made the list of the world’s five hundred fastest supercomputers as late as 1994.

In 1994, who could have imagined that less than a generation later a supercomputer smaller than a textbook would be economical enough to be given free to high school students, and what’s more, it would connect to the sum of mankind’s knowledge, sans cables? Only Kurzweil would’ve been so bold, and while he didn’t make this exact claim about supercomputers, he did anticipate the Internet’s explosion.

In information technologies, each breakthrough pushes the next breakthrough to occur more quickly—the curve we talked about gets steeper. So when considering the iPad 2, the question isn’t what we can expect in the next fifteen years. Instead, look out for what happens in a fraction of that time. By about 2020, Kurzweil estimates, we’ll own laptops with the raw processing power of human brains, though not the intelligence.

Let’s have a look at how Moore’s law may apply to the intelligence explosion. If we assume AGI can be attained, Moore’s law implies the recursive self-improvement of an intelligence explosion may not even be necessary to achieve ASI, or superhuman intelligence. That’s because once you’ve achieved AGI, less than two years later machines of human-level intelligence will have doubled in speed. Then in under two years, another doubling. Meanwhile, average human intelligence will remain the same. Soon the AGI has left us behind.
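A toy simulation makes the argument concrete; the two-year doubling period is borrowed from Moore’s law, and using raw speed as a stand-in for intelligence is the argument’s simplification, not a measured fact:

    # Toy model: once machines reach human level ("1x"), hardware doublings
    # alone pull them ahead while the human baseline stays flat.
    human_speed = 1.0  # human intelligence, held constant
    agi_speed = 1.0    # human-equivalent at the moment AGI arrives
    for years_after_agi in range(2, 11, 2):
        agi_speed *= 2  # one Moore's-law doubling
        print(f"{years_after_agi:2d} years after AGI: machines at "
              f"{agi_speed:3.0f}x human speed, humans at {human_speed:.0f}x")

After a decade of nothing more exotic than business-as-usual hardware doublings, the machines are running at thirty-two times human speed.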
