A Natural History of the Senses
Author: Diane Ackerman
Over the years we’ve tried to teach many different kinds of mammals to speak the way humans do, and though some small success has been reached with primates, dolphins, and harbor seals, we haven’t had much real luck. Our ability to speak is special. We can talk for the same reason we choke so easily: Our larynx lies low in the throat. Other mammals have a voice box high in the throat, so that they can continue breathing while they eat. We can’t. Remember the ventriloquist’s greatest feat? Appearing to drink water and make his dummy talk at the same time. When we swallow, food slides past the trachea; if it catches there, it blocks air to the lungs. Many of us choke every year, and there’s no one who doesn’t know the sensation of almost choking. “It went down the wrong pipe,” we gasp, perhaps lifting our arms over our head to open the airway wider. The Heimlich maneuver uses air stored in the lungs to pop the trapped food back out of the trachea. Just consider what a bad design feature this was for us. In the course of evolution, speech must have been so crucial to survival that it was worth the risk of choking.
Even if other mammals had a low larynx and a tongue in the position that would allow them to make the identical sounds we make, they would need a special part of the brain, called Broca’s area, to process speech the way we do. My last answering machine had a computerized voice that gave me directions and told me what calls had arrived. I named him “Gort” after the robot in the old Michael Rennie sci-fi movie The Day the Earth Stood Still, because his overly flattened male voice—half zombie, half butler—sounded like an outtake from the movie. Whenever there was a power surge, Gort’s logic got scrambled and he became so unreliable that I finally had to retire him. My new machine, whom I call “Gertie,” speaks to me in an even flatter but female voice, which sounds uneducated and sluttish. In action, both Gort and Gertie sound subservient and unthreatening, and I suppose the manufacturers feel that’s a plus. In the cockpits of large airplanes, I’ve heard the annunciator’s computerized warning—almost always a slightly sultry woman’s voice*—saying such urgent things to the pilot as “Fly up! You’re too low. Fly up! You’re too low,” or reminders such as “Your flaps are down.” The synthesized cockpit voices sound a little more lifelike because they have inflections and modulations, but computer voices in general still sound artificial. I’m sure that will change one day soon, and we’ll chat amiably with articulate computers like Hal in Arthur C. Clarke’s 2001.
It’s only taken so long because speech is more complex than the sum of its parts. We can feed the word “top” into a computer as t-ah-p, but who speaks as clearly as a BBC announcer? Yet we’re able to understand people talking so fast that the phonemes blur, so slowly that they drawl, in different tones, at different pitches, and with different accents. One man’s park is another man’s pahk. We make sense of one another with amazing agility, although we do occasionally have to work at it. As hard as it is for many native English speakers to understand Shakespeare’s English, it’s equally difficult for an American of one region to understand an American from another, since dialects are, in part, changes in the pronunciation of familiar words. Once when I was in Fayetteville, Arkansas, I asked my host if there were any spas around. I knew of the famous Hot Springs in the southern part of the state, and I thought visiting it might be a pleasant way to spend an afternoon. “Spas?” he said in a thick Arkansas accent. “You mean Russian agents?”
One fall semester a few years ago, I accepted an appointment as a visiting professor at a college in a small leafy town in Ohio. The only visiting faculty housing was a suite in a sophomore boys’ dorm, whose residents found a woman living in their midst—however discreetly—too much of a temptation. It was still brutally hot in Ohio, but almost every night someone crept up to the fusebox outside my door and threw the circuit breakers, so that my air-conditioning and all other electrical appliances loudly stopped; when I opened the door to reset the fuses, I heard scurrying and giggling down the hallway. Whenever I passed the peephole in my door, I saw an eye staring back in at me, so I covered the hole with masking tape. Twice I woke up to see a young man hanging upside down in front of my living-room window while he illegally spliced into my television cable, reducing my signal to sand. And, without fail, at nine every morning an Armageddon of heavy-metal rock began that lasted well into the night. The one sure thing I learned about sophomore boys is that they’re all decibel and testosterone. Not only did their stereo music throb through the walls, it was physically painful to walk down the hallway toward the torture-level noise, and knocking on a door meant removing one hand from over an ear. The door usually opened onto a smoky room in which girls were quickly rearranging themselves and liquor or drugs hurriedly disappearing.
The diabolical noise didn’t seem to bother any of them. At that volume, it was barely decipherable as music. In part, they were prematurely deaf, as frequently happens these days among loud-rock addicts. But many teenagers like to listen to music played at such high and distorting levels that it ceases to be anything but loudness. I think the loudness must excite them in an erotic way. Unfortunately, hearing can be permanently destroyed by loudness. Researchers have taken photographs of cochlear hair cells irrevocably damaged after only one exposure to a very loud noise.*
Playing a ghetto-blaster at full tilt on a calm afternoon in a quiet retreat, or on the streets of a busy city, is probably more an act of aggression and dominance than of love for music: anyone within earshot will have his personal territory invaded, his peace of mind slit open.
Arlene Bronzaft, a psychologist, discovered that exposing children to chronic noise “amplifies aggression and tends to dampen healthful behavior.” In a study of pupils in grades 2–6 at PS 98, a grade school in Manhattan, she showed that children assigned classrooms in the half of the building facing the elevated train tracks were eleven months behind in reading by their sixth year, compared to those on the quieter side of the building. After the N.Y. City Transit Authority installed noise abatement equipment on the tracks, a follow-up study showed no difference between the two groups. Parents don’t stop to worry about which side of a building their child is going to be sitting on, and yet an eleven-month retardation in the course of only four years of school is disastrous. A child would have to struggle hard to catch up. And we wonder why kids can’t read, we wonder why the drop-out rate is so high in New York. Jackhammers, riveting, and other construction noises are part of what we associate with life in big cities, but by hanging steel-mesh blankets over the construction site to absorb sound it is possible to erect a building quietly. As civilization swells, even sanctuaries in the country could become too clattery to endure, and we may go to extremes to find peace and quiet: a silent park in the Antarctic, an underground dacha.
“Without the loudspeaker, we would never have conquered Germany,” Hitler wrote in his Manual of German Radio in 1938. When we think of noise, we picture loudspeakers, radios that sound like front-line armaments, subways thundering and rattling. What is noise? Is it simply random, pain-level sound? Technically, noise is a sound that contains all frequencies; it is to sound what white is to light. But the noises that irritate us are sounds loud or spiky enough to be potentially damaging to the ear. Because a loud noise grates on our psyche, or actually hurts, we want to get away from it. But there are also nonthreatening sounds we just don’t like, and we tend to classify them as noise, too. Musical dissonance, for instance. When audiences first heard Arnold Schönberg’s revolutionary “Transfigured Night,” composed in 1899, they thought it closer to organized noise than to music.
“Noisy!” one passenger yells to another across the narrow aisle of a small commuter plane, like the Metroliner or Beech 1900, as the props burr, acute as a dentist’s drill, and then become a denser throbbing near the bone. When someone scrapes his fingernails across a chalkboard, we twitch and convulse. So many people around the world get the willies when they hear that blackboard sound that it must not be simply a learned response, but something biological. Neurologists have suggested that it may be a relic of our evolution, when shrieks of terror alerted us to sudden doom. Or perhaps it’s too much like the sound of a predator’s claws skidding gently along the rock just behind us.
At the peak of our youth, our ears hear frequencies between sixteen and 20,000 cycles per second—almost ten octaves—beautifully, and that encompasses a vast array of sounds. Middle C is only 256 cycles per second, whereas the principal frequencies of the human voice are between 100 cycles per second for males and 150 for females. As we age and the eardrum thickens, high-frequency sounds don’t pass as easily along and between the bones to the inner ear, and we start to lose both ends of the range, especially the high notes, as we may discover when we listen to our favorite music. Humans don’t hear low frequencies very well at all, which is merciful; if we did, the sounds of our own bodies would be as deafening as sitting in a lawn chair next to a waterfall. But, even though we may be limited to a certain range of hearing, we’re skilled extenders of our senses. A doctor listens better to a patient’s heart with a stethoscope. We hang microphones in unlikely places: beneath boats to record whale songs, inside the body to record blood flow. We “hear” from the deep reaches of space and time by means of radio telescopes. Bats and bottlenose dolphins have evolved ingenious uses of sounds that are inaudible to us, and which we later invented. Doctors often rely on a form of echolocation, known as ultrasound and consisting of frequencies above 20,000 cycles per second, to help diagnose tumors. The first view a pregnant woman gets of her baby is usually an ultrasound picture. Engineers use ultrasound to test the flyability of airplane parts. Jewelers use ultrasound to clean precious gems. Sports medicine uses ultrasound to help heal sprains. And, of course, the Navy uses echolocation in submarines, though they call it sonar. You can buy a flea collar for your dog or cat that uses high-frequency sound waves to annoy fleas and ticks so that they’ll vacate your pet, who supposedly doesn’t hear the siren any better than you do. We may say “I’m all ears,” but we tend to cock our heads or cup an ear with one hand to help out, and, when hearing fades, we aid our ears with resoundingly small electronic speakers. The original hearing aids were as large as lamp shades and only added twenty decibels; now they are small and discreet and much more powerful.
But, in amplifying the world, they don’t select what’s meaningful from it, what needs to be heard from the pour of sheer noise.
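The “almost ten octaves” figure above can be checked with a quick back-of-the-envelope calculation: an octave is a doubling of frequency, so the span between two frequencies in octaves is the base-2 logarithm of their ratio. A minimal sketch, using the text’s own limits of 16 and 20,000 cycles per second:

```python
import math

# Approximate limits of youthful human hearing, as given in the text
low_hz, high_hz = 16.0, 20_000.0

# Each octave is a doubling of frequency, so the span in octaves
# is the base-2 logarithm of the ratio of the two limits.
octaves = math.log2(high_hz / low_hz)
print(f"{octaves:.1f} octaves")  # prints "10.3 octaves"
```

The result, a little over ten octaves, agrees with the text’s rough figure; for comparison, a piano covers just over seven.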
In a cardiac intensive care unit’s jungle of wires and monitors, small lights blink like the eyes of wild animals, and human hearts reveal their fury in tiny monotonous beeps. When someone’s heart begins to gabble, alert technicians hear the change and come running. But researchers at Michigan State are proposing more complex and subtle monitors, ones that will produce a series of notes, not just beeps. The changing melody of each heart would offer subtle clues to its condition. Because we’re used to associating the heart with sound, this doesn’t strike us as particularly farfetched. However, the researchers’ other proposed use of sound—to hear chemical abnormalities in a patient’s urine—does, and they’ve borne the brunt of endless jokes about their study of musical pee.
We think of sound as something fey, lighter-than-air, an insubstantial thing, not a force with muscle. But at Intersonics, Inc., in Northbrook, Illinois, they’ve begun using sound to lift objects, in what they refer to as “acoustical levitation.” Most objects up to now have been levitated aerodynamically or electromagnetically. Ultrasound can lift objects, too. Four acoustic transducers, emitting ultrasound waves, are arranged so that they direct narrow beams to a central spot. Where the beams intersect, an invisible stockade is created in which small objects can be suspended. Although the sound is louder than that of a jet engine, adults don’t hear it. While they’re floating, the objects don’t feel any acoustic force, but if they drift to the side of the stockade walls, then the sound police push them back in place. Unaware of their cage unless they try to leave it, the objects seem to float in the abracadabra realm of flying carpets. But it is not a parlor game to industry: this ideal crucible allows manufacturers to hold an object in place without touching or contaminating it. Ultrasound beams are powerful enough to heat a small space to the temperature of the sun, or shatter and rearrange molecules, layers of which can be stacked like flapjacks. Scientists are hoping to use ultrasound to create new glasses, including perfectly uniform glass capsules to contain hydrogen fuel in nuclear fusion reactors; brilliant alloy lenses; and fabulous electronics and superconductors. One likely application is manufacturing in outer space. “Ultrasonic levitation furnaces” went aboard the space shuttles in 1983 and 1985. New metal alloys could indeed be made of very high-temperature materials, since there would be no crucible to melt.
John Cage once emerged from a soundproof room to declare that there was no such state as silence. Even if we don’t hear the outside world, we hear the rustling, throbbing, whooshing of our bodies, as well as incidental buzzings, ringings, and squeakings. Deaf people often remark on the variety of sounds they hear. Many who are legally deaf can hear gunfire, low-flying airplanes, jackhammers, motorcycles, and other loud noises. Being deaf doesn’t protect them from ear distress, since humans use their ears for more than hearing. As anyone who has had an inner-ear infection knows, one of the ear’s most important jobs is to maintain balance and equilibrium; the internal workings of the ear are like a biological gyroscope. In the inner ear, semicircular canals (three tubes filled with fluid) tell the brain when the head moves, and how. If you were to half fill a glass with water and swirl it in a circle, the water would spin around, and, even after you stopped, the water would continue swirling for a little while. In a similar way, we feel dizzy even after we’ve gotten off a merry-go-round. Not all animals hear, but they all need to know which way is up. We tend to think of the deaf as people minus ears, but they’re as much prey to ear-related illnesses as hearing people are.