War: What is it good for?

By 2011, air force drones had logged a million active-service flight hours, flying two thousand sorties in that year alone. The typical mission involves drones loitering fifteen thousand feet above a suspect, unseen and unheard, for up to three weeks. Sophisticated cameras (which account for a quarter of the cost of an MQ-1 Predator) record the target's every move, beaming pictures back through a chain of satellites and relay stations to Creech Air Force Base in Nevada. Here, two-person crews sit in cramped but cool and comfortable trailers (I had the opportunity to visit one in 2013) for hour after hour, watching the glowing monitors to establish the suspects' “patterns of life.”

Much of the time, the mission goes nowhere. The suspect turns out to be just an ordinary Afghan, falsely fingered by an angry or hypervigilant neighbor. But if the cameras do record suspicious behavior, ground forces are called in to make an arrest, usually in the dead of night to reduce the risk of a shoot-out. If alert insurgents—woken by the roar of helicopters and Humvees—creep or run away (“leakers” and “squirters,” air force pilots call them), a drone “sparkles” them with infrared lasers, invisible to the naked eye but allowing troops with night-vision gear to make arrests at their own convenience. The mere possibility of attracting drones' attention has hamstrung jihadists: the best plan, an advice sheet for Malian insurgents warned in 2012, was to “maintain complete silence of all wireless contacts” and to “avoid gathering in open areas”—hardly a recipe for effective operations.

Drones have become the eyes and ears of counterinsurgency in Afghanistan, and in about 1 percent of missions they also become its teeth. Tight rules of engagement bind air force crews, but when a suspect does something clearly hostile—such as setting up a mortar in the back of a truck—the pilot can squeeze a trigger on a joystick back in Nevada, killing the insurgent with a precision-guided Hellfire missile. (In Pakistan and Yemen, where the United States is technically not at war, the CIA has separate, secret drone programs. With different rules of engagement and fewer options to use ground forces, these probably use missiles and bombs more often than the air force, but here too, civilian casualties fell sharply between 2010 and 2013.)

Drones are the thin end of a robotic wedge, which is breaking apart conventional fighting done by humans. The wedge has not widened as quickly as some people expected (in 2003, a report from the U.S. Joint Forces Command speculated that “between 2015 and 2025 … the joint force could be largely robotic at the tactical level”), but neither has it gone as slowly as some naysayers thought. “It is doubtful that computers will ever be smart enough to do all of the fighting,” the historian Max Boot argued in 2006, leading him to predict that “machines will [only] be called upon to perform work that is dull, dirty, or dangerous.”

The actual outcome will probably be somewhere between these extremes, with the trend of the last forty years toward machines taking over the fastest and most technically sophisticated kinds of combat accelerating in the coming forty. At present, drones can only operate if manned aircraft first establish air superiority, because the slow-moving robots would be sitting ducks if a near-peer rival contested the skies with fighters, surface-to-air missiles, or signal jammers. Flying a drone over Afghanistan from a trailer in Nevada is an odd, out-of-body experience (I was given a few minutes on a simulator at Creech Air Force Base), because the delay between your hand moving the joystick and the aircraft responding can be as much as a second and a half as the signal races around the world through relay stations and satellite links. Better communications, or putting the pilots in trailers in theater, can shorten the delay, but the finite speed of light means it will never go away. In the Top Gun world of supersonic dogfights, milliseconds matter, and remotely piloted aircraft will never be able to compete with manned fighters.
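Why the delay "will never go away" can be shown with back-of-envelope arithmetic. The sketch below assumes the signal relays through a geostationary satellite and that a full control loop needs two hops (command up to the drone, video back to the pilot); these are illustrative assumptions, since the text does not specify the actual relay path. The rest of the roughly 1.5-second delay would come from encoding, processing, and additional ground relays.

```python
# Back-of-envelope speed-of-light floor on remotely piloting an
# aircraft via a geostationary relay satellite.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786        # geostationary orbit altitude, km

# One ground -> satellite -> ground hop, straight up and down
# (the best possible case):
hop_s = 2 * GEO_ALT_KM / C_KM_PER_S

# A control loop needs two hops: the joystick command up to the
# drone, and the video confirming the result back down.
loop_floor_s = 2 * hop_s

print(f"one satellite hop:  {hop_s:.2f} s")
print(f"control-loop floor: {loop_floor_s:.2f} s")
```

Even before any processing overhead, the geometry alone imposes nearly half a second of lag, which is why no amount of engineering can make a remote pilot competitive in a millisecond-scale dogfight.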

The solution, an air force study suggested in 2009, might be to shift from keeping humans in the loop, remotely flying the aircraft, to having them merely “on the loop.” By this, the air force means deploying mixed formations, with a manned plane acting as wing leader for three unmanned aircraft. Each robot would have its own task (air-to-air combat, suppressing ground fire, bombing, and so on), with the wing leader “monitoring the execution of certain decisions.” The wing leader could override the robots, but “advances in AI [artificial intelligence] will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.”

Unmanned jet fighters are already being tested, and in July 2013 one even landed on the rolling deck of an aircraft carrier (Figure 7.11), one of the most difficult tasks a (human) navy flier ever has to perform. By the late 2040s, the air force suggests, “technology will be able to reduce the time to complete the OODA [observe, orient, decide, and act] loop to micro- or nano-seconds.” But if—when—we reach that point, the obvious question will come up: Why keep humans on the loop at all?

Figure 7.11. Look, no hands! A Northrop Grumman X-47B robot stealth fighter roars past the USS George H. W. Bush in 2013, just before becoming the first unmanned plane ever to land itself on the deck of an aircraft carrier.

The answer is equally obvious: because we do not trust our machines. If the Soviets had trusted Petrov's algorithms in 1983, perhaps none of us would be here now, and when the crew of the USS Vincennes did trust their machines in 1988, they shot down an Iranian passenger jet, killing 290 civilians. No one wants more of that. “We already don't understand Microsoft Windows,” a researcher at Princeton University's Program on Science and Global Security jokes, and so “we're certainly not going to understand something as complex as a humanlike intelligence. Why,” he goes on to ask, “should we create something like that and then arm it?”

Once again, the answer is obvious: because we will have no choice. The United Nations has demanded a moratorium on what it calls “lethal autonomous robotics,” and an international Campaign to Stop Killer Robots is gaining traction, but when hypersonic fighter planes clash in the 2050s, robots with OODA loops of nanoseconds will kill humans with OODA loops of milliseconds, and there will be no more debate. As in every other revolution in military affairs, people will make new weapons because if they do not, their enemies might do so first.

Battle, the former U.S. Army lieutenant colonel Thomas Adams suggests, is already moving beyond “human space” as weapons become “too fast, too small, too numerous, and … create an environment too complex for humans to direct.” Robotics is “rapidly taking us to a place where we may not want to go, but probably are unable to avoid.” (I heard a joke at Nellis Air Force Base: the air force of the future will consist of just a man, a dog, and a computer. The man's job will be to feed the dog, and the dog's job will be to stop the man from touching the computer.)

Current trends suggest that robots will begin taking over our fighting in the 2040s—just around the time, the trends also suggest, that the globocop will be losing control of the international order. In the 1910s, the combination of a weakening globocop and revolutionary new fighting machines (dreadnoughts, machine guns, aircraft, quick-firing artillery, internal combustion engines) ended a century of smaller, less bloody wars and set off a storm of steel. The 2040s promise a similar combination.

Opinions vary over whether this will bring similar or even worse results than the 1910s saw. In the most detailed (or, according to taste, most speculative) discussion, the strategic forecaster George Friedman has argued that hugely sophisticated space-based intelligence systems will dominate war by 2050. He expects American power to be anchored on a string of these great space stations, surrounded and protected by dozens of smaller satellites, in much the same way that destroyers and frigates protect contemporary aircraft carriers. These orbiting flotillas will police the earth below, partly by firing missiles but mainly by collecting and analyzing data, coordinating swarms of hypersonic robot planes, and guiding ground battles in which, suggests Friedman, “the key weapon will be the armored infantryman—a single soldier, encased in a powered suit … Think of him as a one-man tank, only more lethal.”

The focus of mid-twenty-first-century fighting—what Clausewitz called the Schwerpunkt—will be cyber and kinetic battles to blind the space flotillas, followed by attacks on the power plants that generate the vast amounts of energy that the robots will need. “Electricity,” Friedman speculates, “will be to war in the twenty-first century as petroleum was to war in the twentieth.” He foresees “a world war in the truest sense of the word—but given the technological advances in precision and speed, it won't be total war.” What Friedman means by this is that civilians will be bystanders, looking on anxiously as robotically augmented warriors battle it out. Once one side starts losing the robotic war, its position will quickly become hopeless, leaving surrender or slaughter as the only options. The war will then end, leaving not the billion dead of Petrov's day, or even the hundred million of Hitler's, but, Friedman estimates, more like fifty thousand—only slightly more than die each year in automobile accidents in the United States.

I would like to believe this relatively sunny scenario—who wouldn't?—but the lessons of the last ten millennia of fighting make it difficult. The first time I raised the idea of revolutions in military affairs, back in Chapter 2, I observed that there is no new thing under the sun. Nearly four thousand years ago, soldiers in southwest Asia had already augmented the merely human warrior by combining him with horses. These augmented warriors—charioteers—literally rode rings around unaugmented warriors plodding along on foot, with results that were, in one way, very like what Friedman predicts. When one side lost a chariot fight around 1400 B.C., its foot soldiers and civilians found themselves in a hopeless position. Surrender and slaughter were their only options.

New kinds of augmentation were invented in first-millennium-B.C. India, where humans riding on elephants dominated battlefields, and on the steppes in the first millennium A.D., where bigger horses were added to humans to produce cavalry. In each case, once battle was joined, foot soldiers and civilians often just had to wait as the pachyderms or horsemen fought it out, hoping for the best. Once again, whoever lost the animal-augmented fight was in a hopeless position.

But there the similarities with Friedman's scenario end. Chariots, elephants, and cavalry did not mount surgical strikes, skillfully destroying the other side's chariots, elephants, and cavalry and then stopping. Battles did not lead to cool calculations and the negotiated surrender of defenseless infantry and civilians. Instead, wars were no-holds-barred frenzies of violence. When the dust settled after the high-tech horse and elephant fighting, the losers regularly got slaughtered whether they surrendered or not. The age of chariots saw one atrocity after another; the age of elephants was so appalling that the Mauryan king Ashoka forswore violence in 260 B.C.; and the age of cavalry, all the way from Attila the Hun to Genghis Khan, was worse than either.

All the signs—particularly on the nuclear front—suggest that major wars in the mid-twenty-first century will look more like these earlier conflicts than Friedman's optimistic account. We are already, according to the political scientist Paul Bracken, moving into a Second Nuclear Age. The First Nuclear Age—the Soviet-American confrontation of the 1940s–80s—was scary but simple, because mutual assured destruction produced stability (of a kind). The Second Age, by contrast, is for the moment not quite so scary, because the number of warheads is so much smaller, but it is very far from simple. It has more players than the Cold War, using smaller forces and following few if any agreed-on rules. Mutual assured destruction no longer applies, because India, Pakistan, and Israel (if or when Iran goes nuclear) know that a first strike against their regional rival could conceivably take out its second-strike capability. So far, antimissile defenses and the globocop's guarantees have kept order. But if the globocop does lose credibility in the 2030s and after, nuclear proliferation, arms races, and even preemptive attacks may start to make sense.

If major war comes in the 2040s or '50s, there is a very good chance that it will begin not with a quarantined, high-tech battle between the great powers' computers, space stations, and robots but with nuclear wars in South, southwest, or East Asia that expand to draw in everyone else. A Third World War will probably be as messy and furious as the first two, and much, much bloodier. We should expect massive cyber, space, robotic, chemical, and nuclear onslaughts, hurled against the enemy's digital and antimissile shields like futuristic broadswords smashing at a suit of armor, and when the armor cracks, as it eventually will, storms of fire, radiation, and disease will pour through onto the defenseless bodies on the other side. Quite possibly, as in so many battles in the past, neither side will really know whether it is winning or losing until disaster suddenly overtakes it or the enemy—or both at once.
