
Judging from the headlines and the theater marquees, it looks like the last gasp for humans is just around the corner. AI pundits are solemnly asking: Will we one day have to dance behind bars as our robot creations throw peanuts at us, as we do at bears in a zoo? Or will we become lapdogs to our creations?

But upon closer examination, there is less than meets the eye. Certainly, tremendous breakthroughs have been made in the last decade, but things have to be put into perspective.

The Predator, a 27-foot drone that fires deadly missiles at terrorists from the sky, is controlled by a human with a joystick. A human, most likely a young veteran of video games, sits comfortably behind a computer screen and selects the targets. The human, not the Predator, is calling the shots. And the cars that drive themselves are not making independent decisions as they scan the horizon and turn the steering wheel; they are following a GPS map stored in their memory. So the nightmare of fully autonomous, conscious, and murderous robots is still in the distant future.

Not surprisingly, although the media hyped some of the more sensational predictions made at the Asilomar conference, most of the working scientists doing the day-to-day research in artificial intelligence were much more reserved and cautious. When asked when the machines will become as smart as us, the scientists had a surprising variety of answers, ranging from 20 to 1,000 years.

So we have to differentiate between two types of robots. The first is remote-controlled by a human or programmed and pre-scripted like a tape recorder to follow precise instructions. These robots already exist and generate headlines. They are slowly entering our homes and also the battlefield. But without a human making the decisions, they are largely useless pieces of junk. So these robots should not be confused with the second type, which is truly autonomous, the kind that can think for itself and requires no input from humans. It is these autonomous robots that have eluded scientists for the past half century.

ASIMO THE ROBOT

AI researchers often point to Honda’s robot called ASIMO (Advanced Step in Innovative Mobility) as a graphic demonstration of the revolutionary advances made in robotics. It is 4 feet 3 inches tall, weighs 119 pounds, and resembles a young boy with a black-visored helmet and a backpack. ASIMO, in fact, is remarkable: it can realistically walk, run, climb stairs, and talk. It can wander around rooms, pick up cups and trays, respond to some simple commands, and even recognize some faces. It even has a large vocabulary and can speak in different languages. ASIMO is the result of twenty years of intense work by scores of Honda scientists, who have produced a marvel of engineering.

On two separate occasions, I have had the privilege of personally interacting with ASIMO at conferences, when hosting science specials for BBC/Discovery. When I shook its hand, it responded in an entirely humanlike way. When I waved to it, it waved right back. And when I asked it to fetch me some juice, it turned around and walked toward the refreshment table with eerily human motions. Indeed, ASIMO is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside. It can even dance better than I can.

At first, it seems as if ASIMO is intelligent, capable of responding to human commands, holding a conversation, and walking around a room. Actually, the reality is quite different. When I interacted with ASIMO in front of the TV camera, every motion, every nuance was carefully scripted. In fact, it took about three hours to film a simple five-minute scene with ASIMO. And even that required a team of ASIMO handlers who were furiously reprogramming the robot on their laptops after we filmed every scene. Although ASIMO talks to you in different languages, it is actually a tape recorder playing recorded messages. It simply parrots what is programmed by a human. Although ASIMO becomes more sophisticated every year, it is incapable of independent thought. Every word, every gesture, every step has to be carefully rehearsed by ASIMO’s handlers.

Afterward, I had a candid talk with one of ASIMO’s inventors, and he admitted that ASIMO, despite its remarkably humanlike motions and actions, has the intelligence of an insect. Most of its motions have to be carefully programmed ahead of time. It can walk in a totally lifelike way, but its path has to be carefully programmed or it will stumble over the furniture, since it cannot really recognize objects around the room.

By comparison, even a cockroach can recognize objects, scurry around obstacles, look for food and mates, evade predators, plot complex escape routes, hide among the shadows, and disappear in the cracks, all within a matter of seconds.

AI researcher Thomas Dean of Brown University has admitted that the lumbering robots he is building are “just at the stage where they’re robust enough to walk down the hall without leaving huge gouges in the plaster.” As we shall later see, at present our most powerful computers can barely simulate the neurons of a mouse, and then only for a few seconds. It will take many decades of hard work before robots become as smart as a mouse, then a rabbit, then a dog or cat, and finally a monkey.

HISTORY OF AI

Critics sometimes point out a pattern, that every thirty years, AI practitioners claim that superintelligent robots are just around the corner. Then, when there is a reality check, a backlash sets in.

In the 1950s, when electronic computers were first introduced after World War II, scientists dazzled the public with the notion of machines that could perform miraculous feats: picking up blocks, playing checkers, and even solving algebra problems. It seemed as if truly intelligent machines were just around the corner. The public was amazed, and soon there were magazine articles breathlessly predicting the time when a robot would be in everyone’s kitchen, cooking dinner, or cleaning the house. In 1965, AI pioneer Herbert Simon declared, “Machines will be capable, within twenty years, of doing any work a man can do.” But then reality set in. Chess-playing machines could not win against a human expert, and could play only chess, nothing more. These early robots were like one-trick ponies, each performing just one simple task.

In fact, in the 1950s, real breakthroughs were made in AI, but because the progress was vastly overstated and overhyped, a backlash set in. In 1974, under a chorus of rising criticism, the U.S. and British governments cut off funding. The first AI winter set in.

Today, AI researcher Paul Abrahams shakes his head when he looks back at those heady times in the 1950s when he was a graduate student at MIT and anything seemed possible. He recalled, “It’s as though a group of people had proposed to build a tower to the moon. Each year they point with pride at how much higher the tower is than it was the previous year. The only trouble is that the moon isn’t getting much closer.”

In the 1980s, enthusiasm for AI peaked once again. This time the Pentagon poured millions of dollars into projects like the smart truck, which was supposed to travel behind enemy lines, do reconnaissance, rescue U.S. troops, and return to headquarters, all by itself. The Japanese government even put its full weight behind the ambitious Fifth Generation Computer Systems Project, sponsored by the powerful Japanese Ministry of International Trade and Industry. The Fifth Generation Project’s goal was, among other things, to have a computer system that could speak conversational language, have full reasoning ability, and even anticipate what we want, all by the 1990s.

Unfortunately, the only thing that the smart truck did was get lost. And the Fifth Generation Project, after much fanfare, was quietly dropped without explanation. Once again, the rhetoric far outpaced the reality. In fact, there were real gains made in AI in the 1980s, but because progress was again overhyped, a second backlash set in, creating the second AI winter, in which funding again dried up and disillusioned people left the field in droves. It became painfully clear that something was missing.

In 1992, AI researchers had mixed feelings as they held a special celebration in honor of the movie 2001: A Space Odyssey, in which a computer called HAL 9000 runs amok and slaughters the crew of a spaceship. The movie, released in 1968, predicted that by 1992 there would be robots that could freely converse with any human on almost any topic and also command a spaceship. Unfortunately, it was painfully clear that the most advanced robots had a hard time keeping up with the intelligence of a bug.

In 1997, IBM’s Deep Blue accomplished a historic breakthrough by decisively beating the world chess champion Garry Kasparov. Deep Blue was an engineering marvel, computing 11 billion operations per second. However, instead of opening the floodgates of artificial intelligence research and ushering in a new age, it did precisely the opposite. It only highlighted the primitiveness of AI research. Upon reflection, it was obvious to many that Deep Blue could not think. It was superb at chess but would score 0 on an IQ exam. After this victory, it was the loser, Kasparov, who did all the talking to the press, since Deep Blue could not talk at all. Grudgingly, AI researchers began to appreciate the fact that brute computational power does not equal intelligence. AI researcher Richard Heckler says, “Today, you can buy chess programs for $49 that will beat all but world champions, yet no one thinks they’re intelligent.”

But with Moore’s law spewing out new generations of computers every eighteen months, sooner or later the old pessimism of the past generation will be gradually forgotten and a new generation of bright enthusiasts will take over, creating renewed optimism and energy in the once-dormant field. Thirty years after the last AI winter set in, computers have advanced enough so that the new generation of AI researchers are again making hopeful predictions about the future. The time has finally come for AI, say its supporters. This time, it’s for real. The third try is the lucky charm. But if they are right, are humans soon to be obsolete?

IS THE BRAIN A DIGITAL COMPUTER?

One fundamental problem, as mathematicians now realize, is that they made a crucial error fifty years ago in thinking the brain was analogous to a large digital computer. But now it is painfully obvious that it isn’t. The brain has no Pentium chip, no Windows operating system, no application software, no CPU, no programming, and no subroutines that typify a modern digital computer. In fact, the architecture of digital computers is quite different from that of the brain, which is a learning machine of some sort, a collection of neurons that constantly rewires itself every time it learns a task. (A PC, however, does not learn at all. Your computer is just as dumb today as it was yesterday.)

So there are at least two approaches to modeling the brain. The first, the traditional top-down approach, is to treat robots like digital computers, and program all the rules of intelligence from the very beginning. A digital computer, in turn, can be broken down into something called a Turing machine, a hypothetical device introduced by the great British mathematician Alan Turing. A Turing machine consists of three basic components: an input, a central processor that digests this data, and an output. All digital computers are based on this simple model. The goal of this approach is to have a CD-ROM that has all the rules of intelligence codified on it. By inserting this disk, the computer suddenly springs to life and becomes intelligent. So this mythical CD-ROM contains all the software necessary to create intelligent machines.
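To make the idea concrete: in Turing’s own formulation the machine is usually described as a tape of symbols, a read/write head, and a finite table of state-transition rules. Here is a minimal sketch in Python; the particular states, symbols, and the binary-increment task are invented purely for illustration, not taken from the text.

```python
# Minimal Turing machine sketch: a tape, a head, and a transition table.
# The example rules below increment a binary number written on the tape.

def run_turing_machine(tape, rules, state="scan", blank="_", max_steps=1000):
    """Run a simple one-tape Turing machine until it halts or runs out of steps."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        # Grow the tape if the head has walked off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table: (state, symbol) -> (symbol to write, move L/R, next state).
# "scan" walks to the right end of the number; "carry" adds one from the right.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus a carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus a carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: prepend a leading 1
}

print(run_turing_machine("1011", rules))   # 1011 (eleven) + 1 -> 1100 (twelve)
```

The point of the top-down approach is that everything the machine will ever do is spelled out in such a table ahead of time; nothing in the table changes as the machine runs.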

However, our brain has no programming or software at all. Our brain is more like a “neural network,” a complex jumble of neurons that constantly rewires itself.

Neural networks follow Hebb’s rule: every time a correct decision is made, those neural pathways are reinforced. A network does this simply by changing the strength of certain electrical connections between neurons every time it successfully performs a task. (Hebb’s rule can be expressed by the old question: How does a musician get to Carnegie Hall? Answer: practice, practice, practice. For a neural network, practice makes perfect. Hebb’s rule also explains why bad habits are so difficult to break, since the neural pathway for a bad habit is so well-worn.)
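To see what “changing the strength of connections” looks like in the simplest possible terms, here is a toy sketch with made-up numbers and a made-up task: the connection between two units is strengthened a little every time both are active at once, so a well-practiced association becomes easy to trigger from a partial cue.

```python
import numpy as np

# Toy Hebbian learning: "neurons that fire together wire together."
# Connections between co-active units are strengthened a little on every trial.

n_units = 4
weights = np.zeros((n_units, n_units))      # connection strengths, start unwired
learning_rate = 0.1

# A pattern the network "practices" repeatedly: units 0 and 2 active together.
practiced = np.array([1.0, 0.0, 1.0, 0.0])

for trial in range(50):                      # practice, practice, practice
    activity = practiced
    # Hebb's rule: strengthen w[i, j] in proportion to the joint activity of i and j.
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)           # no self-connections

# After practice, activating unit 0 alone strongly recalls unit 2,
# because that pathway is now well-worn -- and hard to "unlearn."
cue = np.array([1.0, 0.0, 0.0, 0.0])
recall = weights @ cue
print(recall.round(2))                       # unit 2 responds far more than the rest
```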

Neural networks are based on the bottom-up approach. Instead of being spoon-fed all the rules of intelligence, neural networks learn them the way a baby learns, by bumping into things and learning by experience. Instead of being programmed, neural networks learn the old-fashioned way, through the “school of hard knocks.”

Neural networks have a completely different architecture from that of digital computers. If you remove a single transistor in the digital computer’s central processor, the computer will fail. However, if you remove large chunks of the human brain, it can still function, with other parts taking over for the missing pieces. Also, it is possible to localize precisely where the digital computer “thinks”: its central processor. However, scans of the human brain clearly show that thinking is spread out over large parts of the brain. Different sectors light up in precise sequence, as if thoughts were being bounced around like a Ping-Pong ball.

Digital computers can calculate at nearly the speed of light. The human brain, by contrast, is incredibly slow. Nerve impulses travel at an excruciatingly slow pace of about 200 miles per hour. But the brain more than makes up for this because it is massively parallel, that is, it has 100 billion neurons operating at the same time, each one performing a tiny bit of computation, with each neuron connected to 10,000 other neurons. In a race, a superfast single processor is left in the dust by a superslow parallel processor. (This goes back to the old riddle: if one cat can eat one mouse in one minute, how long does it take a million cats to eat a million mice? Answer: one minute.)
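The arithmetic behind that claim can be sketched roughly as follows, using the neuron and connection counts quoted above; the firing rate and the speed of the single serial processor are assumed figures, chosen only to show the orders of magnitude involved.

```python
# Back-of-the-envelope arithmetic for the point about massive parallelism.
# Neuron count and connections per neuron are the figures quoted in the text;
# the firing rate and the serial chip's speed are illustrative assumptions.

neurons = 100e9            # 100 billion neurons (from the text)
synapses_per_neuron = 1e4  # ~10,000 connections each (from the text)
firing_rate_hz = 100       # assumed: a neuron fires at most a few hundred times/s

brain_events_per_s = neurons * synapses_per_neuron * firing_rate_hz
serial_ops_per_s = 10e9    # assumed: one fast serial processor, ~10 billion ops/s

print(f"brain, massively parallel: ~{brain_events_per_s:.0e} synaptic events/s")
print(f"single fast processor:     ~{serial_ops_per_s:.0e} operations/s")
print(f"ratio: ~{brain_events_per_s / serial_ops_per_s:.0e}x")
```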

In addition, the brain is not purely digital. Transistors are gates that are either open or closed, represented by a 1 or a 0. Neurons can act digitally (they either fire or they don’t), but they are also analog, transmitting continuous signals as well as discrete ones.
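One common way to capture this mix in simulations (not something described in the text itself) is a leaky integrate-and-fire model: the membrane voltage varies continuously, which is the analog part, while the spike it occasionally emits is all-or-none, which is the digital part. A rough sketch, with invented constants:

```python
# Leaky integrate-and-fire neuron: analog membrane voltage, digital spikes.
# All constants here are illustrative, not measured biological values.

def simulate(input_current, steps=200, dt=0.001, tau=0.02, threshold=1.0):
    """Simulate `steps` time steps; return the voltage trace and spike train."""
    v, voltages, spikes = 0.0, [], []
    for _ in range(steps):
        # Analog part: the voltage leaks toward zero and integrates the input.
        v += dt * (-v / tau + input_current)
        fired = v >= threshold              # digital part: all-or-none spike
        if fired:
            v = 0.0                         # reset after firing
        voltages.append(v)
        spikes.append(1 if fired else 0)
    return voltages, spikes

_, spikes = simulate(input_current=60.0)    # 200 steps of 1 ms = 200 ms
print("spikes in 200 ms:", sum(spikes))     # stronger input -> more spikes
```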

TWO PROBLEMS WITH ROBOTS

Given the glaring limitations of computers compared to the human brain, one can appreciate why computers have not been able to accomplish two key tasks that humans perform effortlessly: pattern recognition and common sense. These two problems have defied solution for the past half century. This is the main reason why we do not have robot maids, butlers, and secretaries.
