Cass Sunstein, a Harvard law professor and former administrator of the Office of Information and Regulatory Affairs in the Obama administration, agrees: “If the burden of proof is on the proponent of the activity or processes in question, the Precautionary Principle would seem to impose a burden of proof that cannot be met.” Why can't it be met? “The problem is that one cannot prove a negative,” notes Mercatus Center analyst Adam Thierer. “An innovator cannot prove the absence of harm, but a critic or regulator can always prove that some theoretical harm exists. Consequently, putting the burden of proof on the innovator when that burden can't be met essentially means no innovation is permissible.” Just because I can't prove that no minotaurs roam the woods surrounding my cabin in Virginia, that shouldn't mean that regulators can, as a precaution, ban virgins from visiting me. (Minotaurs are notoriously fond of the flesh of virgins.)

Anything New Is Guilty

Anything new is guilty until proven innocent. It's like demanding that newborn babies prove that they will never grow up to be serial killers or even just schoolyard bullies before they are allowed to leave the hospital. The point of maximum ignorance about the benefits and costs of any activity or product is before testing. If testing is not permitted and people can't gain experience using a new technology, the result amounts to never doing anything for the first time. The precautionary principle clearly is not a neutral risk management tool; it is specifically aimed at bestowing on opponents a political veto over new technologies and products when one cannot be procured in the marketplace.

Steve Breyman, a professor in the Department of Science and Technology Studies at Rensselaer Polytechnic Institute, has made explicit how he sees the precautionary principle being wielded as a policy tool for radically reshaping modern societies. As he states, “Introduced as part of an overall green plan that included conservation and renewable energy, grass roots democracy, green taxes, defense conversion, deep cuts in military spending, bioregionalism, full cost accounting, the cessation of perverse subsidies, the adoption of green materials, designs and codes, green purchasing, pollution prevention, industrial ecology and zero emissions, etc., the precautionary principle could be an essential element of the transition to sustainability.” It certainly would be good to adopt many of these policies, but his proposals have precious little to do with evaluating the threats to human health or the environment allegedly posed by new technologies. Clearly, the principle's boosters do not regard it as just a neutral risk analysis tool; it is a regulatory embodiment of egalitarian and communitarian moral values.

At the heart of the principle is the admonition that “precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically.” Of course, all scientific conclusions are subject to revision, and none are ever fully established. Since that is the case, the precautionary principle could logically apply to all conceivable activities, since their outcomes are always in some sense uncertain.

On its face, the strong version of the principle actually forestalls the acquisition of the sort of knowledge that would reveal how safe or risky a new technology or product is. In the case of genetically modified crops, two researchers pointed out, “the greatest uncertainty about their possible harmfulness existed before anybody had yet produced one. The precautionary principle would have instructed us not to proceed any further, and the data to show whether there are real risks would never have been produced. The same is true for every subsequent step in the process of introducing genetically modified plants. The precautionary principle will tell us not to proceed, because there is some threat of harm that cannot be conclusively ruled out, based on evidence from the preceding step. The precautionary principle will block the development of any technology if there is the slightest theoretical possibility of harm.”

Let's parse the principle a bit more. One particularly troublesome issue is that some activities that promote human health might “raise threats of harm to the environment,” and some activities that might be thought of as promoting the environment might “raise threats of harm to human health.”

“The precautionary principle, for all its rhetorical appeal, is deeply incoherent,” argues Cass Sunstein. “It is of course true that we should take precautions against some speculative dangers. But there are always risks on both sides of a decision; inaction can bring danger, but so can action. Precautions, in other words, themselves create risks—and hence the principle bans what it simultaneously requires.”

Sunstein argues that five common cognitive biases distort how people view precaution when considering novel risks. First, recent news about hazards comes more easily to mind, distracting people's attention from other risks. Second, people attend far more credulously to worst-case scenarios, even if they are very unlikely to occur. Third, rather than risk a loss, people tend to prefer the status quo even when there is a high probability that a new activity or product will bestow significant benefits. Fourth, a common belief in a benign nature makes technological risks look more suspect. Fifth, people focus on the immediate effects of their decisions and ignore how competing risks play out over the longer run. Sadly, special interests and politicians understand and manipulate these cognitive weaknesses to inflame debates over the safety of technologies they dislike.

Precaution Is Dangerous

Sunstein poses the case of nuclear power: opponents claim that the principle forbids it, yet banning no-carbon nuclear power increases the risks of climate change caused by burning coal to produce electricity. This is not a hypothetical paradox. After a tsunami caused the Fukushima nuclear disaster in Japan, the German government decided to close all of its nuclear power plants. The country is now building more coal-fired electric power plants to replace them, and as a result, its emissions of carbon dioxide are increasing. Is that really the precautionary choice? On what evidence can one answer that question? It's really all about panic and political pandering. No wonder Sunstein dubbed the precautionary principle the “paralyzing principle.”

Let's consider several other cases in which the precautionary principle presents paralyzing conundrums. Take the use of pesticides. Humanity has deployed them to better control disease-carrying insects such as flies, mosquitoes, and cockroaches, and to protect crops. Some studies have suggested that modern pesticides have helped decrease yield losses in wheat, rice, and corn by as much as 35 percent and in soybeans and potatoes by 43 percent. Clearly, pesticide use has significantly improved the health and nutrition of hundreds of millions of people. But some pesticides have had side effects on the environment, such as harming nontargeted species. The precautionary principle's “threats of harm to human health or the environment” standard gives no sure guidance on how to make a trade-off between human health and the protection of nonpest species.

Defenders of the precautionary principle also frequently point out that it is not really so novel, since it is already embedded in many United States regulatory schemes. As a prime example, proponents often cite the drug approval process of the Food and Drug Administration, in which pharmaceutical companies that want to sell their medicines to the public must first prove that they are safe and effective. However, there is good evidence that the FDA's ever-intensifying search for safety is achieving the opposite: the agency now kills more people than it saves.

For example, a 2010 study in the Journal of Clinical Oncology by researchers from the MD Anderson Cancer Center in Houston, Texas, found that the time from drug discovery to marketing increased from eight years in 1960 to twelve to fifteen years in 2010. Five years of that increase resulted from new regulations that boosted the length and cost of clinical trials. The regulators aim to prevent cancer patients from dying from toxic new drugs. However, the cancer researchers calculate that the delays caused by requirements for lengthier trials have instead resulted in the loss of 300,000 patient life-years while saving only 16 life-years.

Conversely, speeding up drug approvals—using less caution—evidently saves lives. A 2005 National Bureau of Economic Research study found that, on balance, the faster FDA drug approvals made possible by new funding legislation passed in the 1990s saved far more lives than they endangered. In fact, new drugs saved up to 310,000 life-years, compared to 55,000 life-years possibly lost to the side effects of drugs that were eventually withdrawn from the market. As Sam Kazman, general counsel of the Competitive Enterprise Institute, a free-market think tank, has observed, “Whenever FDA announces its approval of a major new drug or device, the question that needs to be asked is: If this drug will start saving lives tomorrow, then how many people died yesterday waiting for the agency to act?” Precaution is killing people.

Or consider the case of subsidies and mandates for biofuels. Biofuels are supposed to protect us against the harms of climate change (by cutting greenhouse-warming carbon dioxide emissions from using fossil fuels) and to reduce the risks associated with US dependence on foreign oil. Nevertheless, the biofuel boom has resulted in farmers plowing land that once had been sequestered in conservation programs and using more fertilizer to produce corn. This has eliminated considerable swaths of wildlife habitat, and fertilizer runoff has created an extensive low-oxygen dead zone in the Gulf of Mexico. In addition, growing crops for biofuels instead of for food likely contributed to the recent hike in the global price of grains and thus increased hunger in poor countries. The precautionary principle offers no obvious counsel on the proper balance of those risks.

One of the first journalistic uses of the phrase scientific consensus appears in the July 1, 1979, issue of The Washington Post, in an article on the safety of the artificial sweetener saccharin. “The real issue raised by saccharin is not whether it causes cancer (there is now a broad scientific consensus that it does) [parenthetical in original],” reported the Post. This belief was based on experiments in which mice dosed with huge amounts of the sweetener got bladder cancer. Based on this threatening information, the sweetener was listed as a precautionary measure in 1981 in the US National Toxicology Program's Report on Carcinogens as a substance reasonably anticipated to be a human carcinogen. Thirty years later, the National Cancer Institute reports that “there is no clear evidence that saccharin causes cancer in humans.” In light of this new scientific consensus, the sweetener was delisted as a probable carcinogen in 2000. In this instance precaution was exercised, just as the principle admonishes, because “some cause and effect relationships are not fully established scientifically.” The result was that offering this safe low-calorie sweetener in the marketplace was substantially hindered just as obesity rates in America were skyrocketing.

If pesticides and nuclear power aren't bad enough, what about the dangers posed by the source of energy that practically defines the modern age, electricity? In 1989, New Yorker staff writer Paul Brodeur launched the fear campaign alleging that electromagnetic fields (EMF) generated by power lines, household appliances, television screens, and electric blankets were causing an epidemic of cancer. Following the model of Rachel Carson's Silent Spring, Brodeur's New Yorker articles became the 1993 book The Great Power-Line Cover-up: How the Utilities and the Government Are Trying to Hide the Cancer Hazard Posed by Electromagnetic Fields.

By 1997, the National Academy of Sciences had released a report on EMF, based on a three-year review of five hundred epidemiological studies, which concluded that “the current body of evidence does not show that exposure to these fields presents a human-health hazard. Specifically, no conclusive and consistent evidence shows that exposures to residential electric and magnetic fields produce cancer, adverse neurobehavioral effects, or reproductive and developmental effects.” A year later, the National Cancer Institute issued a seven-year epidemiological study that found no connection whatsoever between exposure to EMFs and childhood leukemia.

As physicist Robert Park later concluded, “The EMF controversy has faded from view. But think of the damage done in the meantime. Hundreds of millions of dollars were spent in litigation between electric utilities and tort lawyers; homeowners near power lines saw a collapse in property values; municipalities had to pay for new electrical work in schools. The real cost, though, is human—millions of parents were terrified to no purpose.”

Remember the cell phone cancer panic? In January 1993, David Reynard of St. Petersburg, Florida, appeared on CNN's Larry King Live, where he claimed that his wife died of a brain tumor that developed where she held her cellular phone to her head. Reynard filed a suit against the cell phone's maker and the phone company. Thus was launched the great cell phone cancer scare, which continues to this day. In 2010, the city of San Francisco, California, specifically citing the Wingspread version of the precautionary principle, passed an ordinance requiring radiation warning labels on mobile phones. It took until 2013 for the city to drop its labeling requirement in the face of lawsuits. The National Cancer Institute now states with regard to cell phones that “to date there is no evidence from studies of cells, animals, or humans that radiofrequency energy can cause cancer.”

Consider also how the precautionary principle would have applied to the introduction of cellular phones. The principle mandates that a new technology not be permitted whenever someone alleges that it might potentially cause harm to human health. So it takes little imagination to think about what would have happened to this increasingly important means of communication had the precautionary principle been invoked as the cell phone cancer scare was spreading through the media. In 1993—when Reynard made his claims—10 million Americans were using cell phones and there were probably fewer than 20 million users around the globe. Today, there are 7 billion mobile phone subscriptions worldwide. By the way, Reynard lost his lawsuit.
