In Meat We Trust
Maureen Ogle

To understand why, we need to look briefly at barnyard nutrition. Single-stomach animals like chickens and hogs thrive when their diets contain animal-derived proteins, such as fish meal and cod liver oil, or “tankage” (the byproducts of rendering plants). Deprived of those proteins, animals are more prone to disease and weigh less at maturity. Less flesh on the animal translates into less meat on the table. By the time World War II began, scientists had been studying the mysteries of animal-derived proteins for more than twenty years, and their research had fostered the development of the commercial feed industry. Ralston-Purina and Quaker Oats, for example, manufactured feedstuffs that included fish meal and cod liver oil. But those ingredients were expensive; if scientists could find substitutes, they could help farmers reduce their production costs.

Thus the search to understand why animal-based proteins are more powerful than ones derived from plants, and to identify the so-called animal protein factor (APF) that differentiates fish oil, say, from plant-based proteins. The wartime feed shortage intensified the need for answers, and university, corporate, and USDA researchers (themselves short-handed as scientists and graduate students headed off to war) doggedly conducted feeding trials with sweet potatoes and other plants, as well as vitamins, minerals, amino acids—anything that might replicate the effects of APF. An employee at one agricultural experiment station chided his colleagues for their lack of imagination. There was no time “to repeat feeding trials for five successive years before conclusions are drawn.” The emergency of war, he argued, demanded “newer and more effective research.” He urged them to follow the lead of biologists, chemists, and geneticists engaged in basic research, especially those studying the fundamental physiological processes of life. How, precisely, did growth happen? What internal mechanism caused plants, for example, to reach for sunlight? “The ultimate objective” of such research, explained one scientist, was “growth control.” Once humans understood the mechanics of growth, they could manipulate and control it and even encourage “abnormal growth.” That explains the interest in colchicine, a substance derived from the crocus plant. Historically, it had been used to treat gout, but in the 1930s, biologists discovered that it accelerated “evolutionary changes,” transforming conventional plants into “giant” specimens. Some researchers believed that colchicine could have the same effect on animals. In 1940, a scientist at the University of Pittsburgh injected it into chicken eggs. The birds that hatched grew “abnormally large” combs and wattles, and males crowed three months earlier than usual.

In the late 1940s, scientists finally unraveled the mystery of APF, arriving there as researchers so often do: by accident and via a circuitous path, in this case research aimed at curing pernicious anemia, at the time a deadly global menace. Pernicious anemia cripples its victims, leaving them too weak to move, and eventually attacks the nervous system. At the time, the only way to alleviate or cure it was with hefty doses of liver—a half-pound or more a day—or injections of liver extract, both of which were expensive. (Nor, it must be admitted, was the prospect of eating a half-pound of liver a day particularly inviting.) But as with APF, it wasn’t clear how or why that cure worked, or which component of liver played the crucial role. Scientists knew that if only they could identify that mystery ingredient, they could design a cheaper substitute. The answer came in 1948 when scientists working at the pharmaceutical company Merck announced that they had isolated liver’s anti-anemia ingredient, which they named vitamin B12. A dose weighing less than a single strand of human hair was sufficient to set patients on the road to good health. But even that was expensive: one ton of liver yielded just twenty milligrams of the vitamin. A few months later, scientists at Lederle Laboratories, owned by American Cyanamid, announced that they had extracted the vitamin from common bacteria, and not long after, the Merck group developed a technique for making large quantities at a low price. Merck manufactured antibiotics, bacteria-killing substances that were then relatively new, in enormous vats of fermented microbes. The process generated gallons of waste in the form of organism-soaked residues, and those could be used to make B12.

The final step in this convoluted chain of discovery came in 1950. Two researchers at American Cyanamid were testing the impact of B12 on livestock with the expectation that the vitamin would improve the animals’ health. It did—and then some. To the men’s astonishment, B12 manufactured from the residue of the antibiotic Aureomycin acted as a superaccelerant. Animals that ate it grew as much as 50 percent faster than animals fed B12 extracted from liver. Nor did it take much to produce that effect: about an ounce of antibiotic per ton of feed. The implications were obvious. Feeds laced with a synthetic vitamin-and-antibiotic product cost less to manufacture than those based on fish meal or tankage, and livestock that ate it would reach maturity faster, which meant farmers could spend less on feed. For broiler producers like Jesse Jewell, the combination produced a 10 percent trifecta: chickens needed 10 percent less time to reach market weight, they ate 10 percent less feed, and mortality rates dropped about 10 percent. The discovery blew “the lid clear off the realm of animal nutrition,” noted the editors of a farming magazine, and left “animal nutritionists gasping with amazement, almost afraid to believe what they had found.” Farmers would “[n]ever again” have to contend with the “severe protein shortages” that plagued them during World War II. From the perspective of both farmers and consumers, antibiotics were as valuable as tractors, combines, and agricultural subsidies.

Enthusiasm for antibiotics and other components of factory farming increased after World War II thanks to three factors. First was the ongoing shortage of agricultural labor. When the war ended, most men and women did not return to the farm. The cold war, a dire need for housing, and the baby boom pushed the economy into hyperdrive. Factory assembly lines, whether in weaponry, building materials, furniture, or automobiles, absorbed record numbers of workers, as did offices, schools, and other non-agricultural employers. American farmers dumped their wartime profits into technologies that replaced human labor.

Second, postwar politics transformed agricultural mechanization into a patriotic imperative. From the 1940s on, American food served as a weapon, first against the Axis enemies, and then in the cold war struggle against communism. U.S. General Lucius Clay, who served as the governor of occupied Germany, summed up the equation in blunt terms: one way to “pave the way to a Communist Europe,” he said, was by forcing citizens of the former warring nations to choose “between being a communist on 1500 calories and a believer in democracy on 1000 calories.” If food, scarce nearly everywhere in the world except the United States, could help win this new war, American farmers must do whatever was necessary to support the cause. There was no time for dallying and no place for laggards, of which, economists grumbled, agriculture harbored entirely too many. Most farmers were “moving forward” into more mechanized, factorylike farming, noted a reporter summarizing one of the many hearings and investigations into the problem of agricultural “underemployment.” But many persisted in “standing still” and in relying on the “methods of their grandfathers.” By refusing to do their share, they imposed a “heavy burden” on the nation.

The third factor that contributed to enthusiasm for factory farming lay well beyond the chicken coop and battlefield. In postwar America, large grocery chains emerged as major power players in the nation’s food supply system, and factory farming, and especially the integrated broiler industry, was well suited to meet their demands.

 

The ascendance of chain grocery stores can be traced back to the agricultural crisis of the 1920s. As beleaguered farmers glutted the market with their corn, cotton, and cattle, prices of those commodities collapsed, and in theory, consumers should have benefited. Instead, food prices soared, and a baffled public demanded an explanation. Dozens of studies examined the entirety of the American food system, from farm to table. Most analysts arrived at the same conclusion: Basic agricultural foodstuffs were cheap—and farmers weren’t making much money—but consumers were paying high prices at grocery stores thanks to two unrelated factors. The first was consumers’ insistence on convenience. One USDA analyst pointed out that more women were working outside the home and they had neither the time nor the inclination to spend hours in the kitchen. A contest between, say, a cooked-from-scratch roast and canned beef stew was no contest at all.

But convenience was neither cheap nor free, and the demand for “[t]ime-saving, convenience, comfort, and satisfaction,” explained a congressional commission appointed to study rising food costs, had “reached a point where it costs more to distribute and serve [food] than it does to produce [it].” The price of store-bought bread, for example, included “a maze of service costs.” Its manufacturer invested in equipment needed to mix and bake the bread and employed an army of salespeople and advertising copywriters to persuade shoppers to buy it. Moving the bread from factory to table necessitated hiring truck drivers, machine operators, packing crews, and deliverymen. Those layers of expense provided jobs and paychecks, but they also drove up the final price of bread and other foods.

But the dismal state of food retailing also contributed to the high cost of eating. In the 1920s, most Americans still shopped for food the same way their grandparents had, buying dry goods like flour and spices at one store; perishables such as potatoes, onions, and apples from another; and meat from a butcher shop. The average food retailer catered to a limited neighborhood clientele and purchased supplies in small lots from multiple food jobbers, each of whom carried a narrow line of goods. One analyst used lettuce to calculate the resulting inefficiencies: Suppose a wholesaler bought a carload of lettuce, or 320 crates. A typical jobber purchased 1/16 of that load; a retailer 1/320; and the consumer “one head or 1/7,680 of a car.” Each subdivision of the original carload added to the cost of the final product. Worse, complained critics, the grocery business was too often the refuge of the incompetent and the inexperienced. A study of grocers in Oshkosh, Wisconsin, revealed that most of them lacked any experience in retailing, and their numbers included a policeman, a shoemaker, and a musician, which, said one observer, explained why so many of them failed. In Louisville, Kentucky, a third of grocers failed after a year; in Buffalo, New York, 60 percent went under.

These inefficiencies created an opening for large, centrally managed chain grocers who could drive down costs through volume buying and streamlined distribution. Even before the 1920s, a handful of chain grocers had made inroads into retailing, mostly in large cities and mainly on the East Coast. Chief among them was the Great Atlantic and Pacific Tea Company, or A&P, which began life in the nineteenth century as a purveyor of tea and coffee but whose owners gradually expanded their offerings to include a full line of grocery items. By 1900, A&P operated two hundred stores. The original outlets offered delivery and credit, but not self-service; clerks gathered items for shoppers. But in 1913, A&P launched a collection of “Economy Stores.” The new shops abandoned in-home delivery and other frills in exchange for low prices that company executives believed would generate high-volume sales. Manufacturers of national brand products like canned foods and dry cereals initially objected, arguing that A&P’s policies besmirched hard-won reputations by treating branded goods as cheap stuff. They changed their minds once they recognized that shoppers who patronized modern stores like A&P’s were more willing to try and buy branded goods.

In the wake of the agricultural crisis of the 1920s, those charged with studying and reforming the American food system touted chain grocery stores as a way to modernize and improve food distribution. The chains streamlined the task of shopping by providing an array of foodstuffs, from produce to canned goods, in a single location. A&P and other grocers also emphasized the pleasures of consumption by providing clean, well-lit environments, wheeled carts, low prices, and, thanks to self-service, maximum convenience. But chains made their biggest impact behind the scenes. Unlike neighborhood shops and independent grocers, chains ordered directly from manufacturers and in bulk, which kept their costs low. Food manufacturers benefited, too; by dealing with a single grocery chain rather than hundreds of individual retailers, they reduced bookkeeping and accounting expenses, to say nothing of costs associated with selling and delivery. All of it added up to efficiency that translated into lower food prices, and by the time World War II ended, grocery chains dominated food retailing.

Their dominance of meat retailing unfolded more slowly. Back in the 1920s and 1930s, the chains had struggled to learn how to sell meat. As one grocery executive admitted, “[c]hain store merchandising is founded on control,” and meat, with its unwieldy carcasses, fat, gristle, blood, and bone, resisted control. An A&P executive begged the nation’s packers to make meat behave less like itself and more like easy-to-manage canned peas. “It is now possible to buy bread already cut in slices,” he argued, so surely it was “logically and economically” feasible to supply grocery chains with precut, prepackaged meats. If only it were that simple. “The packaging of coffee, crackers, [and] cereals is child’s play compared with packaging of fresh meats,” marveled a reporter in 1929. “A whole new technic [sic] must be worked out.” Even when the packaging succeeded, customers weren’t always sure what to do with it. One retailer recounted the day an angry customer marched into his store and demanded a refund for a package of bacon she’d bought three weeks earlier. The meat was spoiled, she told the man. The puzzled grocer asked her where she’d stored it. On top of her icebox, she replied, but that “should be of no significance” because the bacon was in “a ‘sealed package’ and should not require special care.” (Presumably the woman’s refund included a quick lesson in packaging and refrigeration.)
