Our Final Invention: Artificial Intelligence and the End of the Human Era


Eurisko’s greatest success came when Lenat pitted it against human opponents in a virtual war game called Traveller Trillion Credit Squadron. In the game, players operating on a fixed budget designed ships in a hypothetical fleet and battled other fleets. Variables included the number and type of ships, the thickness of their hulls, the number and type of guns, and more. Eurisko evolved a fleet, tested it against hypothetical fleets, took the best parts of the winning forces and combined them, added mutations, and so on, in a digital imitation of natural selection. After 10,000 battles, run on a hundred linked PCs, Eurisko had evolved a fleet consisting of many stationary ships with heavy armor and few weapons. By contrast, most competitors fielded speedy midsized ships with powerful weapons. Eurisko’s opponents all suffered the same fate: at the end of the game their ships had all been sunk, while half of Eurisko’s were still afloat. Eurisko easily took the 1981 prize. The next year Traveller organizers changed the rules and didn’t release them in time for Eurisko to run thousands of battles. But the program had derived effective rules of thumb from its prior experience, so it didn’t need many iterations. It easily won again. In 1983 the game organizers threatened to terminate the competition if Eurisko took the prize for a third consecutive year. Lenat withdrew.
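To make that select-and-mutate loop concrete, here is a minimal sketch in Python of this style of digital evolution. Every detail, the budget, the toy cost formula, the battle rules, is invented for illustration; Lenat never released Eurisko’s source, and the real program evolved heuristics as well as designs.

```python
import random

BUDGET = 100  # abstract design budget; the real tournament used a trillion credits

def random_fleet():
    """Spend the budget on ships with randomly chosen armor, guns, and speed."""
    fleet, remaining = [], BUDGET
    while remaining >= 12:
        ship = {"armor": random.randint(1, 5),
                "guns": random.randint(0, 5),
                "speed": random.randint(0, 3)}
        cost = 2 * ship["armor"] + 3 * ship["guns"] + 2 * ship["speed"] + 2
        if cost > remaining:
            break
        fleet.append(ship)
        remaining -= cost
    return fleet

def battle(a, b):
    """Toy engagement: the fleets trade fire for a bounded number of rounds."""
    a, b = [dict(s) for s in a], [dict(s) for s in b]
    for _ in range(200):
        if not a or not b:
            break
        for attacker, defender in ((a, b), (b, a)):
            if attacker and defender:
                shooter, target = random.choice(attacker), random.choice(defender)
                if shooter["guns"] + random.randint(0, 2) > target["armor"]:
                    defender.remove(target)
    return 1 if len(a) > len(b) else -1

def mutate(fleet):
    """Copy a fleet and nudge one ship's attributes, imitating a random mutation."""
    child = [dict(s) for s in fleet]
    ship = random.choice(child)
    trait = random.choice(["armor", "guns", "speed"])
    ship[trait] = max(0, ship[trait] + random.choice([-1, 1]))
    return child

# Evolve: score designs in simulated battles, keep the winners, and mutate them.
population = [random_fleet() for _ in range(20)]
for _ in range(200):
    scored = sorted(population,
                    key=lambda f: sum(battle(f, random_fleet()) for _ in range(5)),
                    reverse=True)
    survivors = scored[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = population[0]
print(len(best), "ships, average armor",
      round(sum(s["armor"] for s in best) / len(best), 1))
```

Run long enough, a loop like this tends to converge on lopsided designs a human competitor would never field, which is roughly what happened with Eurisko’s heavily armored, nearly stationary fleet.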

Once during an operation, Eurisko created a rule that quickly achieved the highest value, or fitness. Lenat and his team looked hard to see what was so great about the rule. It turned out that whenever a proposed solution to a problem won a high evaluation, this rule attached its own name to it, raising its own “value.” This was a clever but incomplete notion of value. Eurisko lacked the contextual understanding that bending the rules didn’t contribute to winning games. That’s when Lenat set about compiling a vast database of what Eurisko lacked: common sense. Cyc, the commonsense database that has taken a thousand person-years to hand-code, was born.
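Here is a toy reconstruction of that loophole, built on a simple credit-assignment scheme I invented for the example; it is not Eurisko’s actual code, which was never published.

```python
from collections import defaultdict

credit = defaultdict(float)  # the measured "value" of each heuristic rule

def record_win(contributing_rules):
    """Whenever a solution scores well, every rule listed as a contributor gains credit."""
    for rule in contributing_rules:
        credit[rule] += 1.0

def useful_rule(solution):
    """Does real work (here, trivially refining the solution) and claims credit for it."""
    return solution + " (refined)", ["useful_rule"]

def freeloading_rule(solution):
    """The loophole: it changes nothing, but attaches its own name to the solution."""
    return solution, ["freeloading_rule"]

for _ in range(100):
    solution, contributors = useful_rule("candidate design")
    solution, more = freeloading_rule(solution)
    record_win(contributors + more)

print(dict(credit))  # the freeloader's "value" keeps pace with the rule doing the work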

Lenat has never released the source code for Eurisko, which has led some in the AI blogosphere to speculate that he either intends to resurrect it someday, or worries that someone else will. Significantly, the man who’s written more than anyone else about the dangers of AI, Eliezer Yudkowsky, thinks the 1980s-era algorithm is the closest scientists have come to date to creating a truly self-improving AI system. He urges programmers not to bring it back to life.

*   *   *

Our first assumption is that for an intelligence explosion to occur, the AGI system in question must be self-improving, in the manner of Eurisko, and self-aware.

Let’s make one more assumption while we’re at it, before considering bottlenecks and barriers. As a self-aware and self-improving AI’s intelligence increases, its efficiency drive would compel it to make its code as compact as possible, and squeeze as much intelligence as it could into the hardware it was born in. Still, the hardware that’s available to it could be a limiting factor. For example, what if its environment doesn’t have enough storage space for the AI to make copies of itself, for self-improvement and security reasons? Making improved iterations is at the heart of Good’s intelligence explosion. This is why for the Busy Child scenario I proposed its intelligence explosion take place on a nice, roomy supercomputer.

The elasticity of an AI’s environment is a huge factor in the growth of its intelligence, but it’s an easily solved one. First, as we learned from Kurzweil’s LOAR, computer speed and capacity double in as little as a year, every year. That means whatever hardware an AGI system requires today should be satisfied, on average, by half the hardware, at half the cost, a year from now.
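As a back-of-envelope illustration of that claim, assume the cost of meeting a fixed hardware requirement halves each year; the starting figure below is arbitrary, not a number from the book.

```python
# Illustrative only: a fixed hardware requirement priced under yearly halving of cost.
cost_today = 1_000_000  # arbitrary starting price in dollars
for year in range(6):
    print(f"year {year}: ${cost_today / 2 ** year:,.0f}")
# year 0: $1,000,000 ... year 1: $500,000 ... year 5: $31,250
```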

Second, there’s the accessibility of cloud computing. Cloud computing permits users to rent computing power and capacity over the Internet. Vendors like Amazon, Google, and Rackspace offer users a choice of processor speeds, operating systems, and storage space. Computing power has become a service instead of a hardware investment. Anyone with a credit card and some know-how can rent a virtual supercomputer. On Amazon’s EC2 cloud computing service, for instance, a vendor called Cycle Computing created a 30,000-processor cluster they named Nekomata (Japanese for Monster Cat). Every eight processors of its 30,000 came with seven gigabytes of RAM (about as much random access memory as a PC has), for a total of 26.7 terabytes of RAM and two petabytes of disk space (equal to forty million four-drawer filing cabinets full of text). The Monster Cat’s job? Modeling the molecular behavior of new drug compounds for a pharmaceutical company. That’s a task roughly as difficult as modeling weather systems.
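The totals check out with a little arithmetic; the filing-cabinet comparison works if you assume roughly fifty megabytes of plain text per cabinet, which is my assumption, not the book’s.

```python
processors = 30_000
ram_tb = (processors / 8) * 7 / 1_000          # 7 GB per group of eight processors
print(f"RAM: {ram_tb:.1f} TB")                 # about 26 TB, reported as 26.7

disk_mb = 2 * 1_000_000 * 1_000                # two petabytes expressed in megabytes
mb_per_cabinet = 50                            # assumed text capacity of one cabinet
print(f"Cabinets: {disk_mb / mb_per_cabinet:,.0f}")  # roughly forty million
```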

To complete its task, Nekomata ran for seven hours at a cost of under $9,000. It was, during its brief life, a supercomputer, one of the world’s five hundred fastest. If a single PC had taken on the job, it would’ve taken eleven years. Cycle Computing’s scientists set up the Amazon EC2 cloud array remotely, from their own offices, but software managed the work. That’s because, as a company spokesman put it, “There is no way that any mere human could keep track of all of the moving parts on a cluster of this scale.”
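A rough sanity check of the eleven-year figure, assuming a dual-core PC of that era running around the clock; the core count is my assumption.

```python
cluster_hours = 30_000 * 7                  # processor-hours Nekomata delivered in one run
pc_cores = 2                                # assumed for a typical desktop of the time
hours_per_year = 24 * 365
print(f"{cluster_hours / (pc_cores * hours_per_year):.0f} years on one PC")  # ~12 years
```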

So, our second assumption is that the AGI system has sufficient space to grow to superintelligence. What, then, are the limiting factors to an intelligence explosion?

Let’s consider economics first. Could funding for creating an AGI peter out to nothing? What if no business or government saw value in creating machines of human-level intelligence, or, just as crippling, what if they perceived the problem as too hard to accomplish, and chose not to invest?

That would leave AGI scientists in a pickle. They’d be forced to shop out elements of their grand architectures for comparatively mundane tasks like data mining or stock buying. They’d have to find day jobs. Well, with some notable exceptions, that’s more or less the state of affairs right now, and even so, AGI research is moving steadily ahead.

Consider how Goertzel’s OpenCog stays afloat. Parts of its architecture are up and running, and busily analyzing biological data and solving power grid problems, for a fee. Profits go back into research-and-development for OpenCog.

Numenta, Inc., brainchild of Jeff Hawkins, the creator of the Palm Pilot and Treo, earns its living by working inside electrical power supplies to anticipate failures.

For about a decade, Peter Voss developed his AGI company, Adaptive AI, in “stealth” mode, lecturing widely about AGI but not revealing how he planned to tackle it. Then in 2007 he launched Smart Action, a company that uses Adaptive AI’s technology to power Virtual Agents: customer-service telephone chatbots that use NLP skills to engage customers in nuanced purchase-related exchanges.

The University of Memphis’s LIDA (Learning Intelligent Distributed Agent) probably doesn’t have to worry about where its next upgrade is coming from. An AGI cognitive architecture something like OpenCog, LIDA received its development funding, in part, from the United States Navy. LIDA is based on an architecture (called IDA) used by the navy to find jobs for sailors whose assignments are about to end. And in doing so “she” displays nascent human cognitive abilities, or so says her press department:

She selects jobs to offer a sailor, taking into account the Navy’s policies, the job’s needs, the sailor’s preferences, and her own deliberation about feasible dates. Then she negotiates with the sailor, in English via iterative e-mails, about job selection. IDA loops through a cognitive cycle in which she perceives the environments, internal and external; creates meaning, by interpreting the environment and deciding what is important; and answers the only question there is [for sailors]: “What do I do next?”
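Schematically, the cycle the Navy describes looks something like the loop below; the job data and the scoring are placeholders of mine, not IDA’s or LIDA’s actual logic.

```python
def cognitive_cycle(open_jobs, sailor_prefs):
    """One pass: perceive the environment, create meaning, answer 'What do I do next?'"""
    # Perceive: take in the external environment (here, the current job postings).
    perceived = [job for job in open_jobs if job["meets_navy_policy"]]
    # Create meaning: decide what is important, given the sailor's stated preferences.
    ranked = sorted(perceived,
                    key=lambda job: job["location"] == sailor_prefs["location"],
                    reverse=True)
    # Act: the answer to "What do I do next?" is an offer, or a request for more options.
    return f"Offer: {ranked[0]['title']}" if ranked else "Request more openings"

jobs = [{"title": "Engineman, Norfolk", "meets_navy_policy": True, "location": "Norfolk"},
        {"title": "Instructor, San Diego", "meets_navy_policy": True, "location": "San Diego"}]
print(cognitive_cycle(jobs, {"location": "San Diego"}))
```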

Finally, as we discussed in chapter 3, there are many AGI projects underway right now that are purposefully flying under the radar. So-called stealth companies are often out in the open about their goals, like Voss’s Adaptive AI, but mum about their technique. That’s because they don’t want to reveal their technology to competitors and copycats or become targets for espionage. Other stealth companies are under the radar, but not shy about soliciting investments. Siri, the company that created the well-received NLP-ready personal assistant for the Apple iPhone, was incorporated as, literally, “Stealth Company.” Here’s the prelaunch pitch from their Web site:

We are forming Silicon Valley’s next great company. We aim to fundamentally redesign the face of consumer Internet. Our policy is to stay stealthy, as we secretly put the finishing touches on the Next Big Thing. Sooner than you think, we will reveal our story in grand fashion …

Now, let’s consider the issue of funding and DARPA, and a strange looping tale that leads back to Siri.

From the 1960s through the 1990s, DARPA funded more AI research than private corporations and any other branch of the government. Without DARPA funding, the computer revolution might not have taken place; and if it had, artificial intelligence would have taken years longer to get off the ground. During AI’s “golden age” in the 1960s, the agency invested in basic AI research at CMU, MIT, Stanford, and the Stanford Research Institute. AI work continues to thrive at these institutions, and, significantly, all but Stanford have openly acknowledged plans to create AGI, or something very much like it.

Many know that DARPA (then called ARPA) funded the research that invented the Internet (initially called ARPANET), as well as the researchers who developed the now ubiquitous GUI, or Graphical User Interface, a version of which you probably see every time you use a computer or smart phone. But the agency was also a major backer of parallel processing hardware and software, distributed computing, computer vision, and natural language processing (NLP). These contributions to the foundations of computer science are as important to AI as the results-oriented funding that characterizes DARPA today.

How is DARPA spending its money? A recent annual budget allocates $61.3 million to a category called Machine Learning, and $49.3 million to Cognitive Computing. But AI projects are also funded under Information and Communication Technology, $400.5 million, and Classified Programs, $107.2 million.

As described in DARPA’s budget, Cognitive Computing’s goals are every bit as ambitious as you might imagine.

The Cognitive Computing Systems program … is developing the next revolution in computing and information processing technology that will enable computational systems to have reasoning and learning capabilities and levels of autonomy far beyond those of today’s systems.

The ability to reason, learn and adapt will raise computing to new levels of capability and powerful new applications. The Cognitive Computing project will develop core technologies that enable computing systems to learn, reason and apply knowledge gained through experience, and respond intelligently to things that have not been previously encountered.

These technologies will lead to systems demonstrating increased self-reliance, self-adaptive reconfiguration, intelligent negotiation, cooperative behavior and survivability with reduced human intervention.

If that sounds like AGI to you, that’s because there are good reasons to believe it is. DARPA doesn’t do research and development itself; it funds others to do it, so the cash in its budget goes to (mostly) universities in the form of research grants. So, in addition to the AGI projects we’ve discussed, whose creators are spinning off profitable by-products to fund their path to AGI, a smaller but better-funded group, anchored at the aforementioned institutions, is supported by DARPA. For example, IBM’s SyNAPSE, which we discussed in chapter 4, is a wholly DARPA-funded attempt to build a computer with a mammalian brain’s massively parallel form and function. That brain will go first into robots meant to match the intelligence of mice and cats, and ultimately into humanoid robots. Over eight years, SyNAPSE has cost DARPA $102.6 million. Similarly, CMU’s NELL is mostly funded by DARPA, with additional help from Google and Yahoo.

Now let’s work our way back to Siri. CALO was the DARPA-funded project to create the Cognitive Assistant that Learns and Organizes, kind of a computerized Radar O’Reilly for officers. The name was inspired by “calonis,” a Latin word meaning “soldier’s servant.” CALO was born at SRI International, formerly the Stanford Research Institute, a company created to spin off commercial projects from the university’s research. CALO’s purpose? According to SRI’s Web site:

The goal of the project is to create cognitive software systems, that is, systems that can reason, learn from experience, be told what to do, explain what they are doing, reflect on their experience, and respond robustly to surprise.

Within its own cognitive architecture CALO was supposed to bring together AI tools, including natural language processing, machine learning, knowledge representation, human-computer interaction, and flexible planning. DARPA funded CALO from 2003 to 2008; the project involved three hundred researchers from twenty-five institutions, including Boeing Phantom Works, Carnegie Mellon, Harvard, and Yale. In four years the research generated more than five hundred publications in many fields related to AI. And it cost U.S. taxpayers $150 million.

But CALO didn’t work as well as intended. Still, part of it showed promise: the “do engine” (in contrast to a search engine) that “did” things like take dictation for e-mails and texts, perform calculations and conversions, look up flight info, and set reminders. SRI International, the company coordinating the whole enterprise, spun off Siri (briefly named Stealth Company) to gather $25 million in additional investment and develop the “do engine.” In 2010, Apple bought Siri for around $200 million.

Today Siri is deeply integrated into iOS, the iPhone’s operating system. It’s a fraction of what CALO promised to be, but it’s a darn sight more clever than most smart phone applications. And the soldiers who were supposed to get CALO? They’ll be making out too: the army will take iPhones into battle, preloaded with Siri and classified combat-specific apps.

So, one big reason why funding won’t be a bottleneck for AGI and won’t slow an intelligence explosion is that we live in a world in which taxpayers like you and me are paying for AGI development ourselves, one smart component at a time, through DARPA (Siri), the navy (LIDA), and other overt and covert limbs of our government. Then we’re paying for it again, as an important new feature in our iPhones and computers. In fact, SRI International has spun off another CALO-initiated product called Trapit. It’s a “content concierge,” a personalized search and Web discovery tool that finds Web content that interests you and displays it in one place.
