Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100

Consciousness, unfortunately, is a buzzword that means different things to different people, and there is no universally accepted definition of the term.

I personally think that part of the problem has been the failure first to define consciousness clearly, and then to quantify it.

But if I were to venture a guess, I would theorize that consciousness consists of at least three basic components:

1. sensing and recognizing the environment
2. self-awareness
3. planning for the future by setting goals, that is, simulating the future and plotting strategy

In this approach, even simple machines and insects have some form of consciousness, which can be ranked numerically, say on a scale of 0 to 10: consciousness forms a continuum that can be quantified. A hammer cannot sense its environment, so it rates a 0 on this scale. But a thermostat can. The essence of a thermostat is that it senses the temperature of its environment and acts on it by changing it, so it would rank a 1. Hence, machines with feedback mechanisms have a primitive form of consciousness. Worms also have this ability: they can sense the presence of food, mates, and danger and act on this information, but can do little else. Insects, which can detect more than one parameter (such as sight, sound, smell, pressure, etc.), would rank higher, perhaps a 2 or 3.
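The ranking sketched above amounts to a simple scoring rule: count the feedback channels an entity can sense and act on. The following toy sketch is purely illustrative; the entities and channel counts are assumptions for the example, not measurements from the text:

```python
# Toy model of the sensing scale described above: an entity's score is
# the number of feedback channels (parameters it can sense and act on),
# capped at the top of the scale.

def sensing_score(feedback_channels: int) -> int:
    """0 = no sensing (a hammer); 1 = a single feedback loop (a thermostat);
    higher scores for more sensed parameters, capped at 10."""
    return min(feedback_channels, 10)

# Illustrative channel counts, loosely following the examples in the text.
entities = {
    "hammer": 0,      # senses nothing
    "thermostat": 1,  # temperature only
    "worm": 1,        # a few simple cues, little else
    "insect": 3,      # sight, sound, smell
}

for name, channels in entities.items():
    print(name, sensing_score(channels))
```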

The highest form of this sensing would be the ability to recognize and understand objects in the environment. Humans can immediately size up their environment and act accordingly, and hence rate high on this scale. This, however, is where robots score badly. Pattern recognition, as we have seen, is one of the principal roadblocks to artificial intelligence. Robots can sense their environments much better than humans can, but they do not understand or recognize what they see. On this scale of consciousness, robots score near the bottom, near the insects, because they lack pattern recognition.

The next-higher level of consciousness involves self-awareness. If you place a mirror next to most male animals, they will immediately react aggressively, even attacking the mirror: the image causes the animal to defend its territory. Many animals lack any awareness of who they are. But monkeys, elephants, dolphins, and some birds quickly realize that the image in the mirror represents themselves, and they cease to attack it. Humans would rank near the top on this scale, since they have a highly developed sense of who they are in relation to other animals, other humans, and the world. In addition, humans are so aware of themselves that they can talk silently to themselves, evaluating a situation by thinking it through.

Third, animals can be ranked by their ability to formulate plans for the future. Insects, to the best of our knowledge, do not set elaborate goals for the future. Instead, for the most part, they react to immediate situations on a moment-to-moment basis, relying on instinct and cues from the immediate environment.

In this sense, predators are more conscious than prey. Predators have to plan ahead, by searching for places to hide, by planning to ambush, by stalking, by anticipating the flight of the prey. Prey, however, only have to run, so they rank lower on this scale.

Furthermore, primates can improvise as they make plans for the immediate future. If they are shown a banana that is just out of reach, then they might devise strategies to grab that banana, such as using a stick. So, when faced with a specific goal (grabbing food), primates will make plans into the immediate future to achieve that goal.

But on the whole, animals do not have a well-developed sense of the distant past or future. Apparently, there is no tomorrow in the animal kingdom. We have no evidence that they can think days into the future. (Animals will store food in preparation for the winter, but this is largely genetic: they have been programmed by their genes to react to plunging temperatures by seeking out food.)

Humans, however, have a very well-developed sense of the future and continually make plans. We constantly run simulations of reality in our heads. In fact, we can contemplate plans far beyond our own lifetimes. We judge other humans, in fact, by their ability to predict evolving situations and formulate concrete strategies. An important part of leadership is to anticipate future situations, weigh possible outcomes, and set concrete goals accordingly.

In other words, this form of consciousness involves predicting the future, that is, creating multiple models that approximate future events. This requires a very sophisticated understanding of common sense and the rules of nature. It means that you ask yourself “what if” repeatedly. Whether planning to rob a bank or run for president, this kind of planning means being able to run multiple simulations of possible realities in your head.
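This kind of planning can be caricatured in a few lines of code: enumerate possible action sequences, simulate each "what if," and keep the best plan. The actions, transition rule, and scoring below are invented purely for illustration:

```python
# A toy rendition of "asking what if repeatedly": enumerate short
# sequences of actions, simulate each outcome, and keep the best plan.
from itertools import product

ACTIONS = ["wait", "move", "grab"]

def simulate(plan):
    """Score a sequence of actions under a made-up rule: you must move
    before grabbing, and earlier successful grabs score higher."""
    moved = False
    for step, action in enumerate(plan):
        if action == "move":
            moved = True
        elif action == "grab":
            return 10 - step if moved else 0  # success only after moving
    return 0  # never grabbed anything

# Run every three-step "simulation of reality" and pick the best one.
best = max(product(ACTIONS, repeat=3), key=simulate)
print(best)  # a plan that moves first, then grabs
```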

All indications are that only humans have mastered this art in nature.

We also see this when psychologists analyze the psychological profiles of test subjects, comparing the profiles of adults to their profiles when they were children and asking: What is the one quality that predicted their success in marriage, careers, wealth, and so on? When one compensates for socioeconomic factors, one characteristic sometimes stands out from all the others: the ability to delay gratification. According to the long-term studies of Walter Mischel of Columbia University, and many others, children who were able to refrain from immediate gratification (e.g., eating a marshmallow given to them) and held out for greater long-term rewards (getting two marshmallows instead of one) consistently scored higher on almost every measure of future success: on SATs, and in life, love, and career.

But being able to defer gratification also reflects a higher level of awareness and consciousness: these children were able to simulate the future and realize that the deferred rewards were greater. Seeing the future consequences of our actions requires a higher level of awareness.

AI researchers, therefore, should aim to create a robot with all three characteristics. The first is hard to achieve, since robots can sense their environment but cannot make sense of it. Self-awareness is easier to achieve. But planning for the future requires common sense, an intuitive understanding of what is possible, and concrete strategies for reaching specific goals.

So we see that common sense is a prerequisite for the highest level of consciousness. In order for a robot to simulate reality and predict the future, it must first master millions of commonsense rules about the world around it. But common sense is not enough. Common sense is just the “rules of the game,” rather than the rules of strategy and planning.

On this scale, we can then rank all the various robots that have been created.

We see that Deep Blue, the chess-playing machine, would rank very low. It can beat the world champion in chess, but it cannot do anything else. It is able to run a simulation of reality, but only for playing chess; it is incapable of running simulations of any other reality. This is true for many of the world’s largest computers. They excel at simulating the reality of one object, for example, modeling a nuclear detonation, the wind patterns around a jet airplane, or the weather. These computers can run simulations of reality much better than a human. But they are also pitifully one-dimensional, and hence useless at surviving in the real world.

Today, AI researchers are clueless about how to duplicate all these processes in a robot. Most throw up their hands and say that somehow huge networks of computers will show “emergent phenomena” in the same way that order sometimes spontaneously coalesces from chaos. When asked precisely how these emergent phenomena will create consciousness, most roll their eyes to the heavens.

Although we do not know how to create a conscious robot, given this framework for measuring consciousness we can imagine what robots more advanced than we are would look like.

They would excel in the third characteristic: they would be able to run complex simulations of the future far ahead of us, from more perspectives, with more detail and depth. Their simulations would be more accurate than ours, because they would have a better grasp of common sense and the rules of nature, and hence would be better able to ferret out patterns. They would be able to anticipate problems that we might ignore or not be aware of. Moreover, they would be able to set their own goals. If their goals include helping the human race, then everything is fine. But if they one day formulate goals in which humans are in the way, the consequences could be nasty.

But this raises the next question: What happens to humans in this scenario?

WHEN ROBOTS EXCEED HUMANS

In one scenario, we puny humans are simply pushed aside as a relic of evolution. It is a law of evolution that fitter species arise to displace unfit ones, and perhaps humans will be lost in the shuffle, eventually winding up in zoos where our robotic creations come to stare at us. Perhaps that is our destiny: to give birth to superrobots that treat us as an embarrassingly primitive footnote in their evolution. Perhaps that is our role in history, to give birth to our evolutionary successors, and in this view our role is simply to get out of their way.

Douglas Hofstadter confided to me that this might be the natural order of things, but we should treat these superintelligent robots as we do our children, because that is what they are, in some sense. If we can care for our children, he said to me, then why can’t we also care about intelligent robots, which are also our children?

Hans Moravec contemplates how we may feel being left in the dust by our robots: “… life may seem pointless if we are fated to spend it staring stupidly at our ultraintelligent progeny as they try to describe their ever more spectacular discoveries in baby talk that we can understand.”

When we finally hit the fateful day when robots are smarter than we are, not only will we no longer be the most intelligent beings on earth, but our creations may make copies of themselves that are even smarter than they are. This army of self-replicating robots will then create endless future generations of robots, each one smarter than the previous one. Since robots can theoretically produce ever-smarter generations of robots in a very short period of time, eventually this process will explode exponentially, until they begin to devour the resources of the planet in their insatiable quest to become ever more intelligent.

In one scenario, this ravenous appetite for ever-increasing intelligence will eventually ravage the resources of the entire planet, so the entire earth becomes a computer. Some envision these superintelligent robots then shooting out into space to continue their quest for more intelligence, until they reach other planets, stars, and galaxies in order to convert them into computers. But since the planets, stars, and galaxies are so incredibly far away, perhaps the computer may alter the laws of physics so its ravenous appetite can race faster than the speed of light to consume whole star systems and galaxies. Some even believe it might consume the entire universe, so that the universe becomes intelligent.

This is the “singularity.” The word originally came from the world of relativistic physics, my personal specialty, where a singularity represents a point of infinite gravity, such as the center of a black hole, from which nothing can escape. Because not even light can escape, a black hole is surrounded by a horizon beyond which we cannot see.

The idea of an AI singularity was first put in print in 1958 by the mathematician Stanislaw Ulam (who made a key breakthrough in the design of the hydrogen bomb), recalling a conversation with the mathematician John von Neumann. Ulam wrote, “One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them, could not continue.” Versions of the idea have been kicking around for decades. It was later amplified and popularized by the science fiction writer and mathematician Vernor Vinge in his novels and essays.

But this leaves the crucial question unanswered: When will the singularity take place? Within our lifetimes? Perhaps in the next century? Or never? We recall that the participants at the 2009 Asilomar conference put the date at anywhere between 20 and 1,000 years into the future.

One man who has become the spokesperson for the singularity is inventor and bestselling author Ray Kurzweil, who has a penchant for making predictions based on the exponential growth of technology. Kurzweil once told me that when we gaze at the distant stars at night, we should perhaps be able to see some cosmic evidence of the singularity happening in some distant galaxy. With the ability to devour or rearrange whole star systems, this rapidly expanding singularity should leave some footprint behind. (His detractors say that he is whipping up a near-religious fervor around the singularity. His supporters, however, say that he has an uncanny ability to correctly see into the future, judging by his track record.)

Kurzweil cut his teeth on the computer revolution by starting up companies in diverse fields involving pattern recognition, such as speech recognition technology, optical character recognition, and electronic keyboard instruments. In 1999, he wrote a best seller, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which predicted when robots will surpass us in intelligence. In 2005, he wrote The Singularity Is Near, elaborating on those predictions. The fateful day when computers surpass human intelligence will come in stages.

By 2019, he predicts, a $1,000 personal computer will have as much raw power as a human brain. Soon after that, computers will leave us in the dust. By 2029, a $1,000 personal computer will be 1,000 times more powerful than a human brain. By 2045, a $1,000 computer will be a billion times more intelligent than all humans combined; even small computers will surpass the ability of the entire human race.
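The arithmetic behind these dates is a straightforward consequence of steady exponential growth. As a sketch, assuming that price-performance doubles about once a year (a doubling period that is an assumption here, not a figure from the text), the decade from 2019 to 2029 amounts to ten doublings, or roughly the 1,000-fold jump claimed:

```python
# Sanity check of the exponential arithmetic behind these predictions,
# under an assumed doubling period of one year for price-performance.

def growth_factor(years: float, doubling_time_years: float = 1.0) -> float:
    """Multiplicative increase in power after `years` of steady doubling."""
    return 2 ** (years / doubling_time_years)

# Ten doublings from 2019 to 2029:
print(growth_factor(10))  # 1024.0, i.e. roughly the 1,000x claimed
```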
