In October 1996 he joined forces with David Jackson and Yan Kagan at UCLA and Francesco Mulargia at the University of Bologna to write a critique for Science that appeared under the provocative headline "Earthquakes Cannot Be Predicted." In the article they cast doubt on the Haicheng prediction story, suggesting that political pressures had led to exaggerated claims. They wrote that there are "strong reasons to doubt" that any observable, identifiable, or reliable precursors exist. They pointed out that long-term predictions both for the Tokai region in Japan and for Parkfield had failed, while other damaging jolts (Loma Prieta, Landers, and Northridge in California, plus Okushiri Island and Kobe in Japan) had not been predicted. They cautioned that false hopes about the effectiveness of prediction efforts had already created negative side effects.
After the frightening and damaging Northridge temblor in southern California, for example, stories began to spread that an even larger quake was about to happen but that scientists were keeping quiet to avoid causing panic. The gossip became so widespread that Caltech seismologists felt compelled to issue a denial: “Aftershocks will continue. However, the rumor of the prediction of a major earthquake is false. Caltech cannot release predictions since it is impossible to predict earthquakes.”
Not surprisingly, the article spawned a series of energetic replies from those who felt the baby of prediction science should not be thrown out with the bathwater of uncertainty. Max Wyss at the University of Alaska took issue with almost every point made by Geller and company. He countered that foreshocks do occur before 10 to 30 percent of large quakes and are genuine precursors, that strain is released in earthquakes only after it has accumulated for centuries, and that measuring the build-up of stresses within the crust is therefore not a waste of time and money.
Wyss concluded that most experts living at the time of Columbus would have said it was impossible to reach India by sailing west from Europe and that "funds should not be wasted on such a folly." And while Geller et al. seemed to be making a similar mistake, Wyss doubted that "human curiosity and ingenuity can be prevented in the long run." The secrets of quake prediction would be unlocked sooner or later. Richard Aceves and Stephen Park at the University of California, Riverside, suggested it was premature to give up on prediction. "The length of an experiment," they wrote, "should not be an argument against the potential value of the eventual results."
In a later article Geller repeated his contention that "people would be far better off living and working in buildings that were designed to withstand earthquakes when they did occur." He insisted that the "incorrect impression" that quakes can be foretold leads to "wasting funds on pointless prediction research," diverting resources from more "practical precautions that could save lives and reduce property damage when a quake comes."
In the spring of 1997 someone with inside knowledge leaked a government document that slammed Japan's vaunted $147-million-a-year prediction research program. The confidential review, published in the Yomiuri Shimbun, quoted Masayuki Kikuchi, a seismologist at the University of Tokyo's Earthquake Research Institute, as saying that "trying to predict earthquakes is unreasonable." After thirty-two years of trying, all those scientists and all that high-tech equipment had failed to meet the stated goal of warning the public of impending earthquakes.
The report said the government should admit that seismic forecasting was not currently possible and shift the program's focus. It was the sharpest criticism ever, and it did eventually lead to a change in direction. With so much invested and so much more at stake, though, there was no way the whole campaign would be ditched. People in Japan are intimately aware of earthquakes, and the public desire for some kind of warning, whether unreasonable or not, is a political reality that cannot be ignored.
Faults in the Tokai region off the coast of Japan, where three tectonic plates come together, have rattled the earth repeatedly, and people worry about the next one. The subduction zone there tore apart in 1854, the great Tokyo earthquake of 1923 killed more than 140,000 people, two more big fault breaks occurred in the 1940s, and another magnitude 8 is expected any day now.
Japan's first five-year prediction research plan was launched in 1965. In 1978, with still no sign of an impending quake, the program was ramped up with passage of the Large-Scale Earthquake Countermeasures Act, which concentrated most of the nation's seismic brain power and technical resources on the so-called Tokai Gap. Whenever some anomaly is observed by the monitoring network, a special evaluation committee of technical experts, known locally as "the six wise men," must be paged and rushed by police cars to a command center in Tokyo, where they will gather around a conference table and focus on the data stream. Then, very quickly, they must decide whether or not to call the prime minister.
If the anomaly is identified as a reliable precursor, only the prime minister has the authority to issue a warning to Tokyo's thirteen million residents. If and when that day comes, a large-scale emergency operation will be initiated almost immediately. Bullet trains and factory production lines will be stopped, gas pipelines will shut down, highway traffic will be diverted, schools will be evacuated, and businesses will close. According to one study a shutdown like that would cost the Japanese economy as much as $7 billion per day, so the six wise men can't afford to get it wrong. False alarms would be exceedingly unwelcome.
Even though the people of Japan still tell their leaders they want some kind of warning system, they were not at all impressed with what happened in Kobe in 1995. With all those smart people and so much equipment focused on the Tokai Gap, apparently nobody saw the Kobe quake coming. It was an ugly surprise from a fault that was not considered a threat. It killed more than six thousand people.
In spite of the embarrassing setback, Kiyoo Mogi, a professor at Nihon University and then chair of the wise men's committee, defended the prediction program, calling it Japan's moral obligation not only to its own citizens but to people in poorer, quake-prone countries around the world as well. “Can we give up efforts at prediction and just passively wait for a big one?” he asked. “I don't think so.” What Mogi did was try to change the rules.
He argued that a definite "yes or no" prediction, as the six wise men are required by law to make, was beyond Japan's technical capability with the knowledge and equipment available. Instead, he suggested the warnings be graded with some level of probability, expressed like weather forecasts. The government could say, for example, that there's a 40 percent chance of an earthquake this week. People would be made aware that it might happen and that they ought to prepare themselves.
Mogi's idea was rejected, so he resigned from the committee in 1996. The program carried on, but it gradually changed direction. In the aftermath of Kobe, prediction research spending actually increased again with the installation of a dense web of GPS stations to monitor crustal movement and strain build-up. But by September 2003, with all the new equipment up and running, an array of 1,224 GPS stations and about 1,000 seismometers failed to spot any symptoms of the magnitude 8.3 Tokachi-Oki earthquake. It came as another rude shock.
The prediction team had also started work on what they called a "real-time seismic warning system." Japanese scientists were hoping to use super-fast technology to reduce the extent and severity of damage once a fault had begun to slip. They loaded a supercomputer with 100,000 preprogrammed scenarios based on the magnitude and exact location of the coming temblor. As soon as the ground began to shake, instruments would feed data to the computer and the computer would spit out the most likely scenario, one minute later.
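In outline, that is a nearest-neighbor search over a precomputed catalog. Here is a minimal sketch of how such a lookup might work; the Scenario fields, the mismatch weighting, and the toy catalog are illustrative assumptions, not details of the actual Japanese system.

```python
# Hypothetical sketch of a precomputed-scenario lookup, loosely inspired by
# the system described above; all names, weights, and values are illustrative.
from dataclasses import dataclass

@dataclass
class Scenario:
    lat: float        # epicenter latitude of the precomputed rupture
    lon: float        # epicenter longitude
    magnitude: float
    shaking_map: str  # e.g., a file of predicted intensities by district

def best_scenario(scenarios, est_lat, est_lon, est_mag):
    """Pick the precomputed scenario closest to the first field estimates.

    The mismatch metric (squared epicentral offset in degrees plus a
    weighted magnitude term) is an arbitrary but simple choice.
    """
    def mismatch(s):
        return ((s.lat - est_lat) ** 2
                + (s.lon - est_lon) ** 2
                + 0.5 * (s.magnitude - est_mag) ** 2)
    return min(scenarios, key=mismatch)

# Example: three toy scenarios, matched against a rough first estimate.
catalog = [Scenario(35.0, 138.0, 8.0, "tokai_m8.dat"),
           Scenario(39.0, 141.5, 6.9, "iwate_m69.dat"),
           Scenario(34.4, 135.3, 7.3, "kobe_m73.dat")]
print(best_scenario(catalog, 39.1, 141.6, 7.0).shaking_map)  # iwate_m69.dat
```

The lookup itself is trivial even for 100,000 entries; the hard part, estimating location and magnitude from the first seconds of shaking, is what the real-time seismic network supplies.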
But on June 13, 2008, a magnitude 6.9 shockwave hit northern Japan, killing at least thirteen people and destroying homes and factories throughout the region. The real-time system did signal that a powerful jolt was happening, roughly three and a half seconds after it started, but the source of the quake was too close to be of any use to places like Oshu, which was only eighteen miles (30 km) from the epicenter. People there received 0.3 seconds of warning. The unfortunate reality is that those closest to the strongest shaking will always be the ones who receive the shortest notice. Even if the system works exactly as it should, a real-time warning will primarily benefit those farther away. On the other hand, it could stop or slow the spread of fires and speed the arrival of emergency crews. So in Japan, at least for the foreseeable future, the supercomputer and the six wise men still have a job to do.
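The arithmetic behind those 0.3 seconds is straightforward: strong shaking travels outward at roughly the S-wave speed, so the warning a site gets is its wave travel time minus the system's total delay. A back-of-the-envelope sketch, treating the distance as epicentral and ignoring focal depth, with the wave speed and end-to-end alert latency assumed here so the numbers reproduce the figure reported for Oshu:

```python
# Back-of-the-envelope warning-time estimate for an earthquake early
# warning system. Both constants are assumed, illustrative values.
S_WAVE_SPEED_KM_S = 3.5  # strong shaking travels roughly with the S wave
ALERT_LATENCY_S = 8.3    # total delay from rupture start to the alert
                         # reaching a resident (assumed so the result matches
                         # the 0.3 s at Oshu; detection alone took ~3.5 s)

def warning_time_s(distance_km):
    """Seconds of warning before strong shaking arrives at distance_km."""
    s_arrival = distance_km / S_WAVE_SPEED_KM_S
    return s_arrival - ALERT_LATENCY_S

for d in (30, 60, 120):  # Oshu was about 30 km from the epicenter
    print(f"{d:>4} km: {warning_time_s(d):5.1f} s of warning")
# 30 km: 0.3 s, 60 km: 8.8 s, 120 km: 26.0 s (with these assumed numbers)
```

The formula makes the structural problem plain: warning time grows with distance, so the places shaken hardest are exactly the places the alert cannot outrun.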
Not only was the Parkfield earthquake a dozen years late, but the densely woven grid of seismographs, strainmeters, lasers, and other equipment that made the area one of the most closely watched rupture patches in the world had apparently failed to spot any obvious symptoms or definite precursors. In 1934 and 1966 the Parkfield main shocks had been preceded by apparently identical magnitude 5 foreshocks, each about seventeen minutes prior to the magnitude 6 main event. But not this time.
In 1966 as well, the fault had seemed to creep a bit more than normal in the weeks before the failure. There were reports of new cracks in the ground, and a water pipe crossing the zone broke the night before the rupture. Nothing like that happened before the 2004 event: no obvious foreshocks, no slip before the main event. Seven "creepmeters" were deployed along the rupture zone, with nothing to show for the effort. But all was not lost, according to Allan Lindh, who in early 2005 wrote an opinion piece for Seismological Research Letters defending the work at Parkfield. His paper sounded a new rallying cry for prediction science.
Looking closely at where the break occurred, how strong it was, and the aftershock pattern that followed, he argued that a key part of their original prediction had come true. What happened in 2004 was physically "a near-perfect repeat" of the 1966 event, according to Lindh. The same earthquake happened again, rupturing the same fifteen-mile-long (25 km) segment of the San Andreas between the same two little bends or "discontinuities" in the rock and with the same overall magnitude: what some had called Parkfield's signature or "characteristic" earthquake. One might have expected the magnitude to be greater than 6 because the jolt came twelve to fifteen years later than expected, giving the fault more time to accumulate strain in the rocks. But the magnitude was 6, just like its predecessors. Hence, Lindh argued, it was a repeat of the same event.
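A back-of-the-envelope calculation suggests why the extra waiting time need not have produced a dramatically bigger quake. Assuming a nominal twenty-two-year Parkfield recurrence interval and seismic moment proportional to the accumulated slip (both simplifying assumptions of this sketch, not figures from Lindh's paper), the expected magnitude increase is modest:

```python
import math

# Rough sketch: how much bigger might the 2004 Parkfield quake have been if
# the extra waiting time had all gone into extra slip? Assumed values:
NOMINAL_INTERVAL_YR = 22  # approximate average repeat time, 1857-1966
ACTUAL_INTERVAL_YR = 38   # 1966 to 2004

# Moment magnitude scales as M = (2/3) * log10(M0) + const, so a moment
# ratio r translates into a magnitude difference of (2/3) * log10(r).
moment_ratio = ACTUAL_INTERVAL_YR / NOMINAL_INTERVAL_YR
delta_m = (2.0 / 3.0) * math.log10(moment_ratio)
print(f"moment ratio {moment_ratio:.2f} -> magnitude increase {delta_m:.2f}")
# moment ratio 1.73 -> magnitude increase 0.16: still "a magnitude 6" quake
```

On these assumptions, even seventy percent more accumulated slip moves the magnitude by less than a fifth of a unit, so a magnitude 6 repeat is not surprising.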
One curious twist was that the 1966 event had ripped the fault from north to south, while this time it unzipped from south to north. And according to Lindh, there may have been a "small premonitory signal" at three or four Parkfield strainmeters. Holes had been drilled hundreds of feet down into the fracture zone in the late 1990s, and extremely sensitive instruments capable of detecting very small increments of stress had indeed recorded signals "of the order of 10 nanostrain," if all the devices were working properly.
While this sounded like an infinitesimally small thing to measure, Lindh pointed out that if the coming quake had been a magnitude 7 instead of a 6, then the amount of strain, and the creep along the fault, would probably have scaled up by a factor of ten, which "would be easily observable with current downhole instrumentation." His point was that new state-of-the-art strainmeters can detect things even those magical GPS rigs cannot see from above.
When a fault creeps way down deep, not all the horizontal motion is transferred to the surface, because rocks bend and deform under stress. Therefore, if the rocks started to move hundreds of feet below ground, and if this turned out to be a reliable symptom of a coming rupture, the GPS stations up at the surface might not detect the signal even at the supposed magnitude 7 level. But the much more sensitive strainmeters could, or at least might.
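That intuition can be put on a quantitative footing with the classic screw-dislocation model of Savage and Burford, in which slip confined below a locking depth produces a muted, spread-out signal at the surface. The slip amount and locking depth in this sketch are illustrative assumptions, not measurements from Parkfield:

```python
import math

# Illustrative sketch: surface signature of deep creep on a strike-slip
# fault, using the classic screw-dislocation (Savage-Burford) model, in
# which slip s below locking depth D produces a surface displacement
#     u(x) = (s / pi) * atan(x / D)
# at horizontal distance x from the fault trace. Values are assumptions.
SLIP_M = 0.002         # 2 mm of aseismic creep at depth (assumed)
LOCK_DEPTH_M = 10_000  # creep confined below about 10 km (assumed)

def surface_disp_m(x_m):
    return (SLIP_M / math.pi) * math.atan(x_m / LOCK_DEPTH_M)

# Peak surface shear strain sits right over the fault: du/dx at x = 0
# works out to s / (pi * D).
peak_strain = SLIP_M / (math.pi * LOCK_DEPTH_M)
print(f"peak surface strain: {peak_strain / 1e-9:.0f} nanostrain")
print(f"displacement 5 km out: {surface_disp_m(5_000) * 1000:.2f} mm")
# ~64 nanostrain, within reach of borehole strainmeters, but only ~0.3 mm
# of surface motion, lost in the day-to-day scatter of GPS solutions.
```

With these assumed numbers the deep creep sits comfortably above strainmeter resolution yet an order of magnitude below typical daily GPS noise, which is exactly the gap Lindh was pointing to.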