But this humor doesn’t have to be violent. I embrace it and practice it myself in a lightened form, which could be called homeopathic. Just about every technologist I know harbors some Rousseauian fetish in the closet. The same fellow who might work on “Augmented Wilderness,” a technology in which a virtual world is perceived to be superimposed on a remote wilderness trail, will seek out the wild primitivist side of Silicon Valley rituals like Burning Man. The room where I am writing this is filled with rare, archaic, acoustic musical instruments that I have learned to play. I find that digital ways of making music are missing something and I will not let go of that thing. This is entirely reasonable.
Is there really something essential and vital about acoustic instruments that computers can’t touch? Another incarnation of Pascal’s bargain presents itself. I don’t really know, but the cost of holding on to my perception of a difference is manageable, while the cost if I let go might be great, even if the resulting amnesia would hide the loss from me.
CAN WE HANDLE OUR OWN POWER?
Thomas Malthus articulated fear of an apocalypse in a naturalistic framework instead of the established supernatural ones. The future he dreaded from the perspective of the 18th century was one where our own successes grant us gifts we cannot absorb, leading to catastrophe.
In a typical Malthusian scenario, agriculture, public health, medicine, and industrialization enable an unsustainable population explosion, which leads to catastrophic famine. Our beloved technological achievements continue to seduce us even as they lead us to destruction.
Since Malthus, there have been endless replays of the “population bomb” motif, as Paul Ehrlich dubbed it in the 1960s. A documentary called Surviving Progress,1 based on a book called A Short History of Progress,2 puts it this way: “We’re now reaching a point at which technological progress threatens the very existence of humanity.”
A wide variety of Icarusian fates for mankind are never far from our thoughts. Global climate change is a principal example of the moment. Another is the prospect of weapons of mass destruction in the hands of terrorists. One could also mention viruses given flight by the jet age, the prospect that we’ll be bathed in the radiation of nuclear power when the oil runs out, and so on. Some respected technologists have publicly worried that the descendants of our computers might eat us later in this century.
Malthusian scenarios are often not just terrifying, but cruel in their irony. Industrialized, educated populations often face a population anti-bomb these days: a depopulation spiral. This is when there aren’t enough children being born to maintain the population and to balance the burden of an age wave. Japan’s situation was described earlier. Korea, Italy, and many other countries are also experiencing profound depopulation spirals. It is the “less modern” parts of the world that power population explosions.
The threats of global warming, terrorism, and the rest are very real, but not in a surprising or unnatural way. It is wholly natural that, as we humans gain more and more influence over our fates, we accrue an ever-greater variety of ways to commit mass suicide.
An analogy would be an individual learning to drive a car. Anyone who learns to drive has the power to kill himself at any moment. In fact, many do. And yet, most of us accept the risk and responsibility of driving, and for the most part manage to enjoy the power and fun available to us through cars.
In a similar way, on a global scale, it is inevitable that our survival will be in our own hands in more and more ways as technology progresses. While global climate change is in my opinion real, and scary, it is also an inevitable species-wide rite of passage.* It is just one of many that we will have to meet with expertise and cunning, and perhaps with the occasional self-manipulative incantation of optimism.

*Later in the book, when solutions are proposed, we will consider how network architecture might be tweaked to make it easier to confront big challenges like global climate change.
This is not an easy thing to say, so it isn’t said very often. We cannot make the world better through expertise without also creating more and more means for people to destroy the world. Expertise is expertise.
That doesn’t mean increasing expertise is inherently self-defeating! It is better to have more of a say in our fate, even if that means we must trust ourselves. Growing up is good. What is gained is greater than what is lost. There’s a natural lure to believe that the state of humankind before technologists mucked with it was secure and comfortable. Technologists remember that it was not.
The only reason a less transformed world can be imagined as a safer one is that infant mortality and other tragedies used to constitute a constant, “natural” catastrophe. Death tolls were usually so well paid up in advance that Malthusian dangers were mooted. The elevation of the human story from constant catastrophe is one and the same with the rise of technological ability.
Yes, the benefits of technology always have catches. Every technological advance in our adventure up to the present has had side effects. Every medicine is also a poison, and every new source of food is a famine in waiting. Humans consistently demonstrated an ability to use ancient innovations in agriculture, fuel, and construction to deforest regions and destroy local environments. Jared Diamond and others have documented how human societies have repeatedly undermined themselves. We have been obliged to invent our way out of the mess caused by our last inventions since we became human. It is our identity.
The answer to climate change can’t be halting or reversing events. The earth is not a linear system, like a video clip, that can be played in forward or reverse. After we learn how to survive global climate change, the earth will not be the same place it was before. It will be more artificial, more managed.
That is not anything new. It is nothing more than another stage in the adventure that started when Eve bit the apple, which we can also think of as Newton’s apple. (Not to mention Turing’s apple.)
But no one wants to hear that. It is hard to be comfortable accepting the degree of responsibility our species will have to assume in order to survive into the future. The game was entered into long ago and we have no choice but to play.
THE FIRST HIGH-TECH WRITER
It can be a little deflating to realize how much of the present-day conversation about economic systems, technology, and personhood was already well worn in the century before last. “The Ballad of John Henry” was one of the best-known songs of the 19th century. Our John was an apocryphal railroad worker who is said to have competed with a railroad-building machine and won, only to drop dead from exhaustion. Productivity was fatal. The late 19th century was already dominated by anxieties of human obsolescence.
The original Luddites were early 19th century textile workers worried about being made obsolete by improved looms. Just as Aristotle foresaw! Their story was not pretty. They gathered into violent mobs and were punished in public executions.
In material terms, life as a factory worker was better than that of a peasant. So the Luddites were often doing better than their ancestors. And yet their good fortune was terrifyingly fragile. Moment-to-moment loss of personal control when one worked in a factory might have enhanced Luddite anxiety, just as we sometimes fear being locked into a plane more than we fear driving in a car, even though the car is usually more dangerous. Something about becoming part of someone else’s machine was terrifying on a fundamental level.
We have never overcome that anxiety. During the Great Depression, in the 1930s, one of the clichés of the popular press was that robots were coming to take away any jobs that might appear. There were popular stories of robots supposedly killing their makers and robots about to challenge human champions in boxing rings.3
These old paranoias are typically exhumed these days in order to make the case that there’s nothing to worry about. “See, in the old days they worried that technology would make people obsolete and it didn’t happen. Similar worries today are just as silly.”
To that I say, “I agree completely that the fears were wrong then and wrong today, in terms of what’s actually true. People are and will always be needed. The question is whether we’ll engage in complete enough accounting so that people are honestly valued. If there’s ever an illusion that humans are becoming obsolete, it will in reality be a case of massive accounting fraud. What we’re doing now is initiating that fraud. Let’s stop.”
But back in the 19th century, people weren’t thinking of the world as information yet, and the robots of our imaginations were brawny, gunning for blue-collar jobs. Two huge streams of culture and argument that continue to underlie many of today’s conversations were incubated by robot anxiety: the “left” and science fiction.
We find a hatching of the left in the early writings of Karl Marx, who as early as the 1840s was obsessed with the Luddite dilemma. Marx was one of the first technology writers. This realization came to me in a flash many years ago when I was driving in Silicon Valley and some Internet startup was on the radio trumpeting the latest scheme to take over the world. There was a lot of the usual filler about innovation breaking through traditional market boundaries, the globalization of technical talent, and so on. I was just about to turn the radio off, muttering something about how I couldn’t take even one more pitch from one of these companies, when the announcer intoned, “This has been an anniversary reading of Das Kapital.” I had been listening to the lefty station, KPFA, without realizing it.
I’m no Marxist. I love competing in the market, and the last thing I’d want is to live under communism. My wife grew up with it in Minsk, Belarus, and I am absolutely, thoroughly convinced of the misery. But if you select the right passages, Marx can read as being incredibly current.
Every thoughtful technologist has probably gone through a period of self-doubt over Luddite scenarios. The damage done to careers by technological progress is not uniformly distributed among people. If you wait long enough, anyone might be vulnerable to playing the role of Luddite, even if it only happens to certain unlucky people at any given moment. Technological change is unfair, at least in the short term. Can we live with that unfairness?
The reason most technologists can sleep at night is that the benefits of technological progress do seem to eventually benefit everyone rapidly enough to keep the world from exploding or imploding. New jobs appear along with new technologies, even as old ones are destroyed. The descendants of the Luddites are with us today, and work as stockbrokers, personal trainers, and computer programmers. But lately, their adult children are still living at home. Has the chain been broken?
Neither training nor prestige insulates people from the potential to fall prey to the fate of the Luddites. Robotic pharmacists and “artificially intelligent” software performing legal research previously done by human lawyers have both already been shown to be cost-effective,4 and we’re still very early in the process. The only position that is safe at all is to be the proprietor of a top node on the network. And even that role cannot stand if it is to be the only secure human role.
Marx also described a subtler problem of “alienation,” a sense that one’s imprint on the world is not one’s own anymore when one is part of someone else’s scheme in a high tech factory. Today there is a great deal of concern about the authenticity and vitality of life lived online. Are “friends” really friends? These concerns are an echo of Marx, almost two centuries later, as information becomes the same thing as production.
MEANING IN STRUGGLE
H. G. Wells’s science fiction novel The Time Machine, published in 1895, foresees a future in which mankind has split into two species, the Eloi and the Morlocks. Each survives in the ruins of a civilization that had been trapped in Marx’s nightmare and collapsed. What was once a divide between rich and poor evolved into a split between species, and the character of each was debased. The Eloi, descended from the poor, were docile, while the Morlocks, descended from the rich, were decadent and ultimately just as debased.
The Morlocks could have descended from today’s social network or hedge fund owners, while the ancestors of the Eloi undoubtedly felt lucky initially, as free tools helped them crash on each other’s couches more efficiently. What is intriguing about Wells’s vision is that members of both species become undignified, lesser creatures. (Morlocks eat Eloi, which is about as far as one can go in rejecting empathy and dignity.)
When science fiction turns dark, as in The Time Machine or the works of Philip K. Dick or William Gibson, it is usually because people have been rendered absurd by technological advancement. When science fiction has a sunny outlook it is because heroes are making themselves human by struggling successfully against human obsolescence.
The struggle might be against aliens (War of the Worlds), plain old evil (Star Wars), or artificial intelligence, as in 2001: A Space Odyssey, The Matrix, The Terminator, Battlestar Galactica, and many more. In all cases, science fiction is fundamentally retro, in that it re-creates the setting of early human evolution, when human character was first formed in a setting where meaning was inseparable from survival.