
Kurzweil was an early innovator in applying the brain’s lessons to programming. As we’ve discussed, he has argued that reverse engineering the brain is the most promising route to AGI. In an essay defending this view and his predictions about technological milestones, he wrote:

Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.

Back in the 1990s, Kurzweil Applied Intelligence broke new ground in speech recognition with applications designed to let doctors dictate medical reports. Kurzweil sold the company, and it became one of the roots of Nuance Communications, Inc. Whenever you use Siri, it is Nuance’s algorithms that perform the speech recognition part of its magic. Speech recognition is the art of translating the spoken word to text (not to be confused with natural language processing, or NLP, which extracts meaning from written words). After Siri translates your query into text, its three other main talents come into play: its NLP facility, searching a vast knowledge database, and interacting with Internet search providers such as OpenTable, Movietickets, and Wolfram|Alpha.
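
To make the division of labor concrete, here’s a toy sketch, in Python, of a Siri-style pipeline. Every function and name in it is invented for illustration; Apple’s and Nuance’s actual systems are proprietary and vastly more sophisticated.

```python
# A toy Siri-style pipeline. All stages are stand-ins invented for
# illustration; none of these names correspond to real Apple or Nuance APIs.

def speech_to_text(audio: bytes) -> str:
    """Stage 1: speech recognition (stubbed; pretend Nuance did the work)."""
    return "movie times for tonight"

def extract_intent(text: str) -> dict:
    """Stage 2: NLP, turning the transcript into a structured query."""
    domain = "movies" if "movie" in text else "general"
    return {"domain": domain, "query": text}

def route(intent: dict) -> str:
    """Stages 3 and 4: consult a knowledge base, else an outside provider."""
    providers = {"movies": "Movietickets", "dining": "OpenTable"}
    provider = providers.get(intent["domain"], "Wolfram|Alpha")
    return f"forwarding {intent['query']!r} to {provider}"

print(route(extract_intent(speech_to_text(b"..."))))
```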

IBM’s Watson is kind of a Siri on steroids, and a champion at NLP. In February 2011, it employed both brain-derived and brain-inspired systems to achieve an impressive victory against human contestants on Jeopardy! Like the champion chess computer Deep Blue, Watson is IBM’s way of showing off its computing know-how while moving the ball down the field for AI. The long-running game show promised a formidable challenge because of its open domain of clues and its wordplay. Contestants must understand puns, similes, and cultural references, and they must phrase answers in the form of questions. However, speech recognition is not something Watson specializes in. It cannot understand the spoken word. And since it cannot see or feel, it cannot read, so during the competitions the words of the Jeopardy! clues were hand-entered by Watson’s pit crew. And since Watson cannot hear either, audio and video clues were omitted.

Hey, wait a minute, did Watson really win at Jeopardy! or a custom-tailored variation?

Since its victory, to get Watson to understand what people say, IBM has paired it with Nuance speech recognition technology. And Watson is reading terabytes of medical literature. One of IBM’s goals is to shrink Watson down from its present size—a roomful of servers—to refrigerator size, and to make it the world’s best medical diagnostician. One day not long from now, you may have an appointment with a virtual assistant who’ll pepper you with questions and provide your physician with a diagnosis. Unfortunately, Watson still cannot see, and so might overlook health indicators such as clear eyes, rosy cheeks, or a fresh bullet wound. IBM also plans to put Watson on your smartphone as the ultimate Q&A app.

*   *   *

Where do Watson’s brain-derived capabilities come in? Its hardware is massively parallel, using some 3,000 parallel processors to handle 180 different software modules, themselves written for parallel processors. Parallel processing is the brain’s greatest feat, and one that software developers struggle to emulate. As Granger told me, parallel processors and the software designed for them have not lived up to their hype. Why? Because the programs written for them are not good at dividing up tasks to solve in parallel. But as Watson has demonstrated, improved parallel software is changing all that, and parallel hardware is right behind. New parallel chips are being designed to dramatically accelerate existing software.
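
To see what “dividing up tasks” means in practice, here’s a minimal, self-contained Python sketch that fans a toy scoring job out across worker processes. It has nothing to do with Watson’s actual code; it only illustrates the pattern its thousands of processor cores exploit at far greater scale.

```python
# Fanning a workload out across parallel worker processes, a far simpler
# cousin of what Watson's roughly 3,000 processors do. The scoring function
# is a meaningless stand-in, invented for illustration.
from concurrent.futures import ProcessPoolExecutor

def score_candidate(candidate: str) -> tuple[str, int]:
    # Pretend this is one of Watson's 180 modules weighing one candidate.
    return candidate, sum(ord(ch) for ch in candidate) % 100

if __name__ == "__main__":
    candidates = ["elevator clause", "escrow", "easement", "eminent domain"]
    with ProcessPoolExecutor() as pool:  # candidates are scored concurrently
        results = list(pool.map(score_candidate, candidates))
    print(max(results, key=lambda pair: pair[1]))
```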

Watson showed parallelism can handle staggering computational workloads at blinding speed. But Watson’s main achievement is this—it can learn on its own. Its algorithms find correlations and patterns in the textual data its makers give it. How much data? Encyclopedias, newspapers, novels, thesauruses, all of Wikipedia, the Bible—in total, about eight million thick books’ worth of text, which it processes at 500 gigabytes (the equivalent of one thousand thick books) per second. Significantly, this included prepared word databases, taxonomies (words with categories and classifications), and ontologies (descriptions of words and how they relate to each other). Basically, that’s a whole lot of common sense about words. For example, “A roof is the top part of a house, not the bottom part, like a basement, or the side part, like an exterior wall.” This sentence would tell Watson a little something about roofs, houses, basements, and walls, but it’d need to know a definition of each for the sentence to make sense, and a definition for “part,” too. And it’d want to see the term used in lots of sentences. Watson has all that.
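
To picture what a machine-readable ontology encodes, here’s a toy of my own construction built from the roof example above; real ontologies such as WordNet link hundreds of thousands of terms this way.

```python
# A toy ontology encoding the roof/house sentence as typed relations.
# Invented for illustration; real ontologies are vastly larger, but the
# shape of the data is the same: words plus labeled links between them.
ONTOLOGY = {
    "roof":     {"part_of": "house", "position": "top"},
    "basement": {"part_of": "house", "position": "bottom"},
    "wall":     {"part_of": "house", "position": "side"},
}

def describe(part: str) -> str:
    entry = ONTOLOGY[part]
    return f"a {part} is the {entry['position']} part of a {entry['part_of']}"

print(describe("roof"))      # a roof is the top part of a house
print(describe("basement"))  # a basement is the bottom part of a house
```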

In game two of the IBM Jeopardy! challenge, this clue came up: “This clause in a union contract says that wages will rise or fall depending on a standard such as cost of living.” First, Watson parsed the sentence; that is, it chose and analyzed its key words. Then it derived from its already digested sources that wages were something that could rise or fall, that a contract contained terms about wages, and that contracts contained clauses. It had another very important clue: the category heading was “Legal ‘E’s.” That told Watson the answer would be related to a common legal term and would start with the letter “E.” Watson beat the humans to the answer: “What is an elevator clause?” It took all of three seconds.
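
The category trick amounts to a filter: discard every candidate that can’t satisfy the “Legal ‘E’s” constraint before weighing any other evidence. The sketch below is my illustration of the idea, not Watson’s actual logic, and the candidate list is invented.

```python
# Filtering candidates by the category constraint: the answer must be a
# legal term and must start with "E". A toy illustration, not Watson's code.
LEGAL_TERMS = {"elevator clause", "escalator clause", "easement", "habeas corpus"}

def fits_category(candidate: str) -> bool:
    return candidate.startswith("e") and candidate in LEGAL_TERMS

candidates = ["elevator clause", "cost of living", "habeas corpus", "easement"]
print([c for c in candidates if fits_category(c)])
# -> ['elevator clause', 'easement']; only the survivors go on to evidence scoring
```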

And after Watson got a correct answer in a category, it gained confidence (and played more boldly) because it “realized” it was interpreting the category correctly. It adapted to game play, or learned how to play better, while the game was in progress.

Step outside Jeopardy! for a moment and imagine how fast, adaptive machine learning could be tuned to drive an automobile, steer an oil tanker, or prospect for gold. Think about all that power in a human-caliber mind.

Watson demonstrated another interesting kind of intelligence, too. Its DeepQA software generates hundreds of possible answers and gathers hundreds of pieces of evidence for each one. Then it filters and ranks the answers by its confidence in each. If it doesn’t feel confident about an answer, it won’t answer at all, because in Jeopardy! there’s a penalty for incorrect responses. In other words, Watson knows what it doesn’t know. Now, you might not believe that a probability calculation constitutes self-awareness, but is it somewhere on a continuum that leads there? Does Watson really know anything?
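
IBM has described DeepQA only in broad strokes, but the buzz-or-pass decision reduces to something like the sketch below: rank the candidates by confidence, and stay silent under a threshold. The names and numbers are invented.

```python
# A heavily simplified buzz-or-pass decision: answer only when the most
# confident candidate clears a threshold, since wrong answers cost money.
def decide(candidates: dict[str, float], threshold: float = 0.5) -> str | None:
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else None  # None means don't buzz

print(decide({"elevator clause": 0.97, "grandfather clause": 0.02}))  # buzzes
print(decide({"escrow": 0.31, "easement": 0.28}))  # None: too unsure to risk it
```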

Well, if the circuits of the brain are governed by algorithms, as Granger and others in the field of computational neuroscience assert, do we humans really know anything? Or put another way, maybe we both know something. And certainly Watson is a breakthrough that has a lot to teach us. Kurzweil put it like this:

A lot has been written that Watson works through statistical knowledge rather than “true” understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences.… One could just as easily refer to the distributed neurotransmitter concentrations in the human cortex as “statistical information.” Indeed, we resolve ambiguities in much the same way that Watson does by considering the likelihood of different interpretations of a phrase.

In other words, as we’ve discussed, your brain remembers information based on the strength of the electrochemical signals in the synapses that encoded that information. The greater the concentration of chemicals, the longer and stronger the information will be stored. Watson’s evidence-based probabilities are also a kind of encoding, just in computer form. Is that knowledge? This dilemma recalls John Searle’s Chinese Room puzzle from chapter 3. How will we ever know if computers are thinking or are just good mimics?

True to form, the day after Watson won the Jeopardy! competition, Searle said, “IBM invented an ingenious program—not a computer that can think. Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything.”

When asked if Watson can think, David Ferrucci, IBM’s chief scientist in charge of Watson, paraphrased the Dutch computer scientist Edsger Dijkstra: “Can a submarine swim?”

That is, a submarine doesn’t “swim” as a fish swims, but it gets around in the water faster than most fish, and it can stay down longer than any mammal. In fact, a sub swims better than fish or mammals in some ways precisely because it does not swim like a fish or a mammal—it has different strengths and weaknesses. Watson’s intelligence is impressive, albeit narrow, because it is not like a human’s. On average it’s a heck of a lot faster. And it can do things only computers can do, like answer Jeopardy! questions 24/7 for as long as required, and port itself to an assembly line of new Watson architectures when the need arises, seamlessly sharing knowledge and programming. As for whether or not Watson thinks, I vote that we trust our perceptions.

To Ken Jennings, one of Watson’s human Jeopardy! opponents (who dubbed himself “the Great Carbon-Based Hope”), Watson felt like a human competitor.

The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a fifteen-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels “sure” enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.

Is Watson really thinking? And how much does it really understand? I’m not sure. But I am sure Watson is the first species in a brand-new ecosystem—the first machine to make us wonder if it understands.

Could Watson become the backbone of an overall AGI cognitive architecture? Well, it has the kind of backing no other single system has, including deep pockets, a company publicly willing to take on great challenges and risk failure, and a plan to finance its own development into the future, to keep it alive and forward-moving. If I ran IBM, I’d take stock of the vast amounts of publicity, goodwill, sales, and science that have come out of the grand challenges of Deep Blue and Watson, and I’d announce to the world that in 2020, IBM will take on the Turing test.

*   *   *

Advances in natural language processing will transform parts of the economy that until now have seemed immune to technological change. In another few years, librarians and researchers of all kinds will join retail clerks, bank tellers, travel agents, stockbrokers, loan officers, and help desk technicians in the unemployment lines. Following them will be doctors, lawyers, and tax and retirement consultants. Think of how quickly ATMs have all but replaced bank tellers, and how grocery store checkout lines have started phasing out human clerks. If you work in an information industry (and the digital revolution is changing everything into information industries), watch out.

Here’s a quick example. Like college basketball? Which of these two paragraphs was authored by a human sportswriter?

SAMPLE A

Ohio State (17) and Kansas (14) split the thirty-one possible first-place votes by coaches. The latest change at the top of the poll was necessitated after Duke was upset by ACC opponent Virginia Tech on Saturday night. The Buckeyes (27–2) defeated Big Ten foes Illinois and Indiana rather easily in finding their way back to the top. Ohio State started 24–0 and spent four weeks at number one earlier this season before falling to third. This is the fifteenth straight week that Ohio State has been ranked in the top three. Kansas (27–2) remained second and trails Ohio State by only four poll points.

SAMPLE B

Ohio State gets back the number one ranking following a week in which they first got a victory at home against Illinois, 89–70. After that came another win at home against Indiana, 82–61. Utah State enters the top twenty-five at number twenty-five with a win at home over Idaho, 84–68. Temple falls out of the rankings this week with a loss at then first-ranked Duke and a win at George Washington, 57–41. Arizona is a big mover this week to number eighteen after an upset loss at USC, 65–57 and an upset loss at UCLA, 71–49. St. John’s shot up eight spots to number fifteen after wins against then fifteenth-ranked Villanova, 81–68 and DePaul, 76–51.

Have you made your guess? Neither is any Red Smith, but just one is human. That’s the author of sample A, which appeared on an ESPN Web site. Sample B was written by an automated publishing platform created by Robbie Allen of Automated Insights. In one year his Durham, N.C.–based company has generated 100,000 automatically written sports articles and posted them on hundreds of Web sites devoted to specific teams (look for the trade name Statsheet). Why does the world need robot sportswriters? Allen told me that many teams were not covered by any journalists, leaving a vacuum for fans. And Automated Insights’ completed articles could be sent to team Web sites and picked up by other sites just minutes after the game bell. Humans can’t work that fast. Allen, a former Cisco Systems Distinguished Engineer, wouldn’t tell me the “secret sauce” of his dazzling architecture. But soon, he said, Automated Insights will supply content for finance, weather, real estate, and local news. All his hungry servers require is semistructured data.
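
Allen wouldn’t share his method, so what follows is purely my guess at the simplest version of the idea: pour semistructured box-score data into an English template. Statsheet’s real system is surely far more varied, but the sketch shows why structured data makes the job tractable.

```python
# The crudest possible robot sportswriter: fill an English template from
# semistructured game data. A toy guess, not Automated Insights' method.
game = {"winner": "Ohio State", "loser": "Illinois",
        "winner_pts": 89, "loser_pts": 70, "venue": "home"}

TEMPLATE = ("{winner} got a victory at {venue} against {loser}, "
            "{winner_pts}–{loser_pts}.")

print(TEMPLATE.format(**game))
# Ohio State got a victory at home against Illinois, 89–70.
```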

*   *   *

Once you’ve started examining computational neuroscience’s results, it’s hard (at least for me) to imagine significant progress being made with AGI architectures that rely solely on cognitive science. Doesn’t a complete understanding of how the brain functions at every level seem like a more certain and comprehensive path to an intelligent machine than efforts that proceed without these principles? Scientists won’t need to dissect all one hundred billion of the brain’s neurons to understand and model their functions, since the brain’s structure is massively redundant. They also may not need to model the bulk of the brain, including the regions that control autonomic functions such as breathing, heartbeat, the fight-or-flight response, and sleep. On the other hand, it might become apparent that intelligence must reside in a body that it controls, and that body must exist in a complex environment. The embodiment debate won’t be settled here. But consider concepts such as bright, sweet, hard, and sharp. How would an AI know what these perceptions meant, or build upon them to create concepts, if it had no body? Wouldn’t there be a barrier to its becoming intelligent at a human level if it didn’t have senses?
