Who Owns the Future?

Jaron Lanier

The core ideal of the Internet is that one trusts people, and that given an opportunity, people will find their way to be reasonably decent. I happily restate my loyalty to that ideal. It’s all we have.

But the demonstrated capability of Facebook to effortlessly engage in mass social engineering proves that the Internet as it exists today is not a purist's emergent system, as is so often claimed, but largely a top-down, directed one. There can be no sweeter goal of social engineering than increasing organ donations, and yet the extreme good of the precedent says nothing about the desirability of its inheritance.

We pretend that an emergent meta-human being is appearing in the computing clouds—an artificial intelligence—but actually it is humans, the operators of Siren Servers, pulling the levers.

THE GLOBAL TRIUMPH OF TURING’S HUMOR

The news of the day often includes an item about recent developments in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools. But such conclusions aren’t just changing how we think about computers—they are reshaping the basic assumptions of our lives in misguided and ultimately damaging ways.

The nuts and bolts of artificial-intelligence research can often be more usefully interpreted without the concept of AI at all. For example, in 2011, IBM scientists unveiled a "question answering" machine that is designed to play the TV quiz show Jeopardy. Suppose IBM had dispensed with the theatrics, and declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained IBM's team as much (deserved) recognition as the claim of an artificial intelligence, but it would also have educated the public about how such a technology might actually be used most effectively.

AI technologies typically operate on a variation of the process described earlier that accomplishes translations between languages. While innovation in algorithms is vital, it is just as vital to feed algorithms with "big data" gathered from ordinary people. The supposedly artificially intelligent result can be understood as a mash-up of what real people did before. People have answered a lot of questions before, and a multitude of these answers are gathered up by the algorithms and regurgitated by the program. This in no way denigrates the result or suggests it isn't useful. It is not, however, supernatural. The real people from whom the initial answers were gathered deserve to be paid for each new answer given by the machine.
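To make that concrete, here is a minimal sketch, in Python, of the mash-up reading of a question-answering system. Everything in it is hypothetical: a toy corpus of previously given human answers, a bag-of-words matcher, and a ledger that routes a small payment to the contributor whose prior answer gets regurgitated. It is a thought experiment in code, not anyone's production system, but it shows that provenance, and therefore payment, is technically easy to keep.

    from collections import Counter, defaultdict
    import math
    import re

    # Hypothetical corpus: answers real people already gave, with attribution.
    CORPUS = [
        {"question": "what causes tides",
         "answer": "The gravitational pull of the moon and sun.",
         "contributor": "alice"},
        {"question": "why is the sky blue",
         "answer": "Air molecules scatter blue light more than red.",
         "contributor": "bob"},
        {"question": "what causes seasons",
         "answer": "The tilt of Earth's axis relative to its orbit.",
         "contributor": "carol"},
    ]

    def vectorize(text):
        """Bag-of-words term counts: statistics, not understanding."""
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    ledger = defaultdict(float)  # contributor -> accumulated nanopayments

    def answer(question, fee=0.001):
        """Regurgitate the closest prior human answer and credit its author."""
        q = vectorize(question)
        best = max(CORPUS, key=lambda e: cosine(q, vectorize(e["question"])))
        ledger[best["contributor"]] += fee  # the nanopayment argued for above
        return best["answer"]

    print(answer("What causes the tides?"))  # Alice's old answer comes back
    print(dict(ledger))                      # {'alice': 0.001}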

Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an AI.” While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost. Needless to say, this approach would hide its tracks so that it would be hard to send a nanopayment to an author who had been aggregated.
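The same point can be made in a few lines of hypothetical Python: atomizing scanned text into an anonymous pool genuinely destroys the trail back to an author, while keeping each snippet tagged with its source costs almost nothing. The books, authors, and snippets below are placeholders, a sketch of the two designs rather than a description of anything Google actually built.

    # Hypothetical scanned books; authors, titles, and text are placeholders.
    books = [
        {"author": "Author X", "title": "Book One",
         "text": "First sentence. Second sentence."},
        {"author": "Author Y", "title": "Book Two",
         "text": "Another sentence. Yet another."},
    ]

    # Machine-centric design: one anonymous pool of decontextualized snippets.
    # Once mixed like this, there is no track left to follow back to a person.
    anonymous_pool = [s.strip()
                      for b in books
                      for s in b["text"].split(".") if s.strip()]

    # Provenance-preserving design: the same snippets, each still tied to a voice.
    attributed_pool = [{"snippet": s.strip(),
                        "author": b["author"],
                        "title": b["title"]}
                       for b in books
                       for s in b["text"].split(".") if s.strip()]

    # Any aggregate built from attributed_pool can still route credit (or a
    # nanopayment) to whoever was aggregated:
    print({e["author"] for e in attributed_pool
           if "sentence" in e["snippet"].lower()})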

What all this comes down to is that the very idea of artificial intelligence gives us the cover to avoid accountability by pretending that machines can take on more and more human responsibility. This holds for things that we don’t even think of as artificial intelligence, like the recommendations made by Netflix and Pandora. Seeing movies and listening to music suggested to us by algorithms is relatively harmless, I suppose. But I hope that once in a while the users of those services resist the recommendations; our exposure to art shouldn’t be hemmed in by an algorithm that we merely want to believe predicts our tastes accurately. These algorithms do not represent emotion or meaning, only statistics and correlations.
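For readers who want to see how little is inside such a recommender, here is a minimal sketch in Python, with invented listening histories. It computes nothing but co-occurrence counts: "people who played this also played that." There is no emotion or meaning anywhere in it, which is exactly the point.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical listening histories; the song names are placeholders.
    histories = [
        {"Song A", "Song B", "Song C"},
        {"Song A", "Song B"},
        {"Song B", "Song C", "Song D"},
    ]

    # Count how often each pair of songs shows up in the same history.
    cooccur = defaultdict(int)
    for h in histories:
        for x, y in combinations(sorted(h), 2):
            cooccur[(x, y)] += 1
            cooccur[(y, x)] += 1

    def recommend(song, k=2):
        """Rank songs purely by co-occurrence: correlation, not taste."""
        scores = {y: n for (x, y), n in cooccur.items() if x == song}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("Song A"))  # -> ['Song B', 'Song C']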

What makes this doubly confounding is that while Silicon Valley might sell artificial intelligence to consumers, our industry certainly wouldn’t apply the same automated techniques to some of its own work. Choosing design features in a new smartphone, say, is considered too consequential a game. Engineers don’t seem quite ready to believe in their smart algorithms enough to put them up against Apple’s late chief executive, Steve Jobs, or some other person with a real design sensibility.

But the rest of us, lulled by the concept of ever-more intelligent AIs, are expected to trust algorithms to assess our aesthetic choices, the progress of a student, the credit risk of a homeowner or an institution. In doing so, we only end up misreading the capability of our machines and distorting our own capabilities as human beings. We must instead take responsibility for every task undertaken by a machine and double-check every conclusion offered by an algorithm, just as we always look both ways when crossing an intersection, even though the signal has been given to walk.

When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on—with the machines and with ourselves. So why, aside from the theatrical appeal to consumers and reporters, must engineering results so often be presented in a Frankensteinian light?

The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: One day in the not-so-distant future, the Internet will suddenly coalesce into a superintelligent AI, infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.

It should go without saying that we can’t count on the appearance of a soul-detecting sensor that will verify that a person’s consciousness has been virtualized and immortalized. There is certainly no such sensor with us today to confirm metaphysical ideas about people. All thoughts about consciousness, souls, and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.

What I would like to point out, though, is that a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity—whether it’s the distrust among Islamic or Christian fundamentalists of the scientific worldview, or even the discomfort that often greets progress in fields like climate change science or stem-cell research.

If technologists are creating their own ultramodern religion, and it is one in which people are told to wait politely as their very souls are made obsolete, we might expect further and worsening tensions. But if technology were presented without metaphysical baggage, is it possible that modernity would make people less uncomfortable?

Technology is essentially a form of service. Technologists work to make the world better. Our inventions can ease burdens, reduce poverty and suffering, and sometimes even bring new forms of beauty into the world. We can give people more options to act morally, because people with medicine, housing, and agriculture can more easily afford to be kind than those who are sick, cold, and starving.

But civility, human improvement, these are still choices. That’s why scientists and engineers should present technology in ways that don’t confound those choices.

We serve people best when we keep our religious ideas out of our work.

DIGITAL AND PRE-DIGITAL THEOCRACY

People must not be gradually equated with machines if we are to engineer a world that is good for people. We must not allow technological change to be driven by a philosophy in which people aren't held to be special. But what is special about people? Must we accept a metaphysical or supernatural principle to acknowledge ourselves?

This book will culminate with a prospectus for what I’m calling “humanistic information economics.” Humanism might include a tolerance of some form of dualism. Dualism means there isn’t just one plane of reality. To some people it might mean that there’s a separate spiritual realm, or an afterlife, but to me it just means that neither physical reality nor logic explains everything. Being a skeptical dualist means walking a tightrope. Fall to the left and you acquiesce to superstitions. To the right lies the trap of sloppy reductionism.

Dualism suggests a difference between people and even very advanced machines. When children learn to translate between languages or answer questions, they also nurture assets such as context, taste, and moral feeling that our machine inventions cannot originate, but only mash up.

Many technologist friends tell me they think I am clinging to a sentimental and arbitrary distinction. My reasons are based both on a commitment to the truth and on pragmatism (the survival of liberty—for people).

Belief in the specialness of people is a minority position in the tech world, and I would like that to change. The way we experience life—call it "consciousness"—doesn't fit in a materialistic or informational worldview. Lately I prefer to call it "experience," since the opposing philosophical team has colonized the term consciousness. That term might be used these days to refer to the self-models that can be implemented inside a robot.

WHAT IS EXPERIENCE?

If we wish to ask what “experience” is, we can frame it as the question “What would be different if it were absent from our world?”

If personal experience were missing from the universe, how would things be different? A range of answers is possible. One is that nothing would be different, because consciousness was just an illusion in the first place. (However, I would point out that consciousness is the one thing that isn’t reduced if it’s an illusion.)

Another answer is that the whole universe would disappear because it needed consciousness. That idea was characteristic of followers of the physicist John Archibald Wheeler’s early work. He once seemed to believe that consciousness plays a role in keeping things afloat by taking the role of the quantum observer in certain quantum-scale interactions.

Yet another answer would be that a consciousness-free version of our universe would be similar but not identical, because people would get a little duller. That would be the approach of certain cognitive scientists, suggesting that consciousness plays a specific but limited practical function in the brain.

But then there’s another answer. If consciousness were not present, the trajectories of all the particles would remain identical. Every measurement you could make in the universe would come out identically. However, there would be no “gross,” or everyday objects. There would be neither apples nor houses, nor brains to perceive them. Neither would there be words or thoughts, though the electrons and chemical bonds that would otherwise comprise them in the brain would remain just the same as before.

There would only be the particles that make up things, in exactly the same positions they would otherwise occupy, but not the things. In other words, consciousness provides ontology for particles. If there were no consciousness, the universe would be adequately described as being nothing but particles. Or, if you prefer a computational framework, only the bits would be left, but not the data structures. It would all mean nothing, because it wouldn’t be experienced.

The argument can become more complicated, in that there are limited information bandwidths between different levels of description in the material world, so that one might identify dynamics at a gross level that could not be described by particle interactions. But the grosser a process is, the more it becomes subject to differing interpretations by observers. In a minimal quantum system, only a limited variety of measurements can be made, so while there can be arguments over interpretation, there can be less argument about phenomenology. In a big system, that isn’t the case. Which economic indicators are substantial? There’s no consensus.

The point is that one goes round and round trying to get rid of an experiencing observer in an attempt to describe the universe we experience, and it is inherently impossible to verify that projects of that kind have been completed.

That is why I don’t think reason can definitively resolve disputes about whether people are “special.” These kinds of arguments recall Kantian attempts to use reason to prove or disprove the existence of God. Whether the argument is about people or God, the moves are roughly the same. So I can’t prove that people are special, and no one can prove the contrary, either, but I can argue that it’s a better bet to presume we are special, for little might be lost and much might be gained by doing so.
