The Universal Sense
by Seth Horowitz

These sounds, inaudible to humans, are the unheard song of the Eiffel Tower. In large part this is because humans hear airborne sounds, and sound travels very differently depending on the medium. While the rigidity of the tower's structural members damps higher-frequency sounds, that same stiffness lets sound move through the iron about fifteen times faster than through air, which also means any vibration travels correspondingly farther, reflecting and reverberating within the entire structure. So any hard tap would essentially "ring" the entire tower briefly before being swallowed up in the general hum of other vibrations. But it was only by using seismic microphones that recorded from about 0.1 Hz to 20 Hz, the acoustic realm of earthquakes and landslides, and pitch-shifting those sounds up into our auditory range, that we were able to hear the tower breathe and moan, shudder and sway as if it were a living organism, reacting to the things that crawled on it and the winds and rain that blew through it.
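
As an aside: the passage doesn't say how the pitch-shifting was done, but the simplest approach is just playback speed, since playing a recording back N times faster multiplies every frequency by N. Here is a minimal Python sketch; the factor of 1,000 is my illustrative choice, not the author's:

```python
# Playing a recording back N times faster multiplies every frequency by N.
# The speedup factor of 1,000 below is purely illustrative; the passage
# doesn't say what shift was actually used.
def shifted_band(low_hz: float, high_hz: float, speedup: float) -> tuple[float, float]:
    """Frequency range of a recording after speeding playback up by `speedup`."""
    return (low_hz * speedup, high_hz * speedup)

print(shifted_band(0.1, 20.0, 1000))  # (100.0, 20000.0): the 0.1-20 Hz
                                      # seismic band lands inside human hearing
```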

The Eiffel Tower may seem like a special case, and it is certainly one of the most interesting sonic spaces I've ever visited, but all spaces and places have acoustic lives of their own based on their shape, their construction materials, what they are filled with, and, mostly, what sources of sound and vibration they are near. While most auditory research takes place in clean, well-lit laboratories with soundproof booths and nicely calibrated equipment that would make an audiophile drool, the rest of the world, where we do all our hearing, is filled with complex sounds that are composed of more than one frequency and vary in amplitude and phase over time scales both short and long. The range of frequencies that most humans can detect runs from the very deep bass of 20 Hz up to the screechingly tinny high end of 20,000 Hz (or 20 kHz). Each frequency has its own wavelength (the length of a single complete cycle of the sound), which has important implications for how sound will change in a space. If we know how fast sound travels in a given medium, it's easy to calculate the distance one complete wave or cycle takes up: the very low frequency of a foghorn at 100 Hz has a wavelength of 3.4 meters (about 11 feet), whereas the ultra-high biosonar of a bat at 100 kHz has a wavelength of only about one-third of a centimeter (an eighth of an inch).
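
To make that arithmetic concrete, here is a minimal Python sketch of the wavelength calculation, assuming the usual round figure of 343 meters per second for sound in air:

```python
# Wavelength = speed of sound / frequency.
SPEED_IN_AIR_M_S = 343.0  # dry air at about 20 degrees C

def wavelength_m(frequency_hz: float, speed_m_s: float = SPEED_IN_AIR_M_S) -> float:
    """Length of one complete cycle, in meters."""
    return speed_m_s / frequency_hz

print(f"foghorn, 100 Hz:    {wavelength_m(100):.2f} m")           # ~3.43 m, about 11 feet
print(f"bat sonar, 100 kHz: {wavelength_m(100_000) * 100:.2f} cm")  # ~0.34 cm
```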

Here's a question: why are foghorn sounds so low-pitched? Consider the logistics. You have to get a signal from shore to something really far away, whose position is unknown and which probably is not visible, to prevent an unpleasant intersection of ship and rocks. You need a sound to accomplish this (though we also have lighthouses, of course), but why low frequencies? Because the lower the frequency, the longer the wavelength; and if the wavelength is very long, the pressure changes will not be much affected by small objects in the path. All the energy that goes into a single cycle is spread over a distance larger than anything likely to be in the way, and so low-frequency sounds can travel farther. A higher-frequency sound, such as a bat's biosonar, has a much shorter wavelength and is more likely to bounce off, be absorbed by, or be otherwise distorted by smaller objects, so it won't travel as far. (Making lots of high-frequency sounds lets bats get echoes from very small objects relatively close to them, which they then proceed to eat.)
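
A crude way to express this size-versus-wavelength logic in code; the one-wavelength cutoff is my simplification for illustration, not a formula from the book:

```python
# Rough rule of thumb (an assumption for illustration): an object reflects
# sound effectively only when it is at least about as large as the
# wavelength; much smaller objects are mostly bent around (diffracted).
def is_effective_reflector(object_size_m: float, frequency_hz: float,
                           speed_m_s: float = 343.0) -> bool:
    wavelength = speed_m_s / frequency_hz
    return object_size_m >= wavelength

print(is_effective_reflector(0.01, 100))      # False: a 1 cm moth ignores a foghorn
print(is_effective_reflector(0.01, 100_000))  # True: the same moth reflects bat sonar
```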

But echoes are not just for bats. My first remembered introduction to echoes came when I was a young child hunting for fossils in the Catskills while my parents were busy doing uninteresting grown-up things. After I'd dug messily through enough dirt to uncover several tyrannosaurs, my mother, down the hill and across from a large stone outcropping about a quarter mile away, began calling my name, and I heard her voice repeat, getting quieter and quieter with each repetition. My early wonder at the miracle of acoustic reflection was spoiled when she yelled, "Get down here now!" with only the emphasized "now" repeating itself. I hastened to obey, since clearly my mother had mastery of hidden powers; after all, she'd made the rocks repeat her messages. But in fact it was just that she was facing the very dense, relatively flat, and hence acoustically reflective rock wall, which reflected her voice in a series of quieter and progressively more distorted versions. The sound traveled about a quarter of a mile out and back (about 2,600 feet), so at the speed of sound it took roughly two and a half seconds for the first echo to reach me, losing energy in the higher frequencies, then hitting the slope of the rocky grade behind me and bouncing both upward and back toward the distant rock wall. This both decreased the intensity of the subsequent echoes through scattering loss and attenuated the higher frequencies even more, until the directly heard "now!" overwhelmed the last of the reflections as they succumbed to the ambient noise of even that quiet mountain area.

But echoes are only one part of how objects change sounds. If you are reading this book in your bedroom, on a train or a bus, or in a classroom, every surface in that space—tables, walls, even other people—can and will change any sounds generated within earshot. Any surface that sound can strike will change it in some way, and the materials that make up or cover those surfaces will each change the sound in their own fashion. Even the simplest sound played in an uncluttered room ten feet in every dimension, with relatively bare walls, will generate thousands of overlapping echoes. And since each frequency has its own wavelength, and hence changes in phase and amplitude differently as it strikes the surfaces, these echoes interfere with each other, making certain frequencies louder through constructive interference and others quieter through destructive interference.
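
To see that interference at work, here is a small Python sketch of the simplest possible case: a direct sound plus a single echo. The ten-foot path difference and the 0.7 echo gain are made-up illustrative values:

```python
import math

SPEED_FT_S = 1100.0      # rough speed of sound in air
extra_path_ft = 10.0     # hypothetical: the echo's path is 10 ft longer than the direct path
delay_s = extra_path_ft / SPEED_FT_S

def combined_gain(freq_hz: float, echo_gain: float = 0.7) -> float:
    """Level of direct sound plus one delayed echo, relative to the direct
    sound alone: |1 + g * exp(-i * 2*pi*f*t)|, a one-echo comb filter."""
    phase = 2 * math.pi * freq_hz * delay_s
    return math.hypot(1 + echo_gain * math.cos(phase), echo_gain * math.sin(phase))

for f in (55, 110, 165, 220):  # cancellation and reinforcement alternate
    print(f"{f:>3} Hz: gain {combined_gain(f):.2f}")
# 55 Hz: 0.30 (destructive), 110 Hz: 1.70 (constructive), and so on
```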

The summation of all these complex changes to the original sound is called reverberation, and it involves not only the tens of thousands of individual echoes but also the damping and amplification caused by constructive and destructive interference from anything in the space. Hard tiled walls and floors (such as in a bathroom, whose acoustic qualities may mask some small irregularities in your otherwise perfect shower-singing voice) are highly reflective, making a room rich in reverb but potentially sonically muddy. If the floor is carpeted or a ceiling has soft, irregularly surfaced acoustic tiles, the energy from the vibrating air molecules is more likely to be absorbed than reflected. This is the basis of sound attenuation in places such as offices; in
locations where sound quality is critical, such as recording studios or anechoic rooms, people often go a step further and line the walls with geometrically patterned soft foam to increase sound absorption even more. You can see a simple example of this at home. If your stereo is in a room with hard floors, try placing a rug just in front of the speakers; you will hear a muffling or damping of the sound as you lose not only the sound bouncing off that part of the floor but also the reflections the floor would have sent toward the rest of the room. Then try moving chairs in front of the speakers. You should hear a decrease in the higher-frequency sounds, as the lower frequencies bend successfully around the chairs while the higher ones bounce off them. This gives you a feel for the acoustic signature of a space and some idea of why proper speaker design and placement is in fact an art. Getting the best sound requires not just the best sound reproduction but an understanding of how acoustic energy flows toward the listener's ears.
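
For readers who want a number to attach to all this, acousticians often start with Sabine's classic reverberation formula, which the book doesn't mention but which makes the carpet effect concrete. The room dimensions and absorption coefficients below are typical textbook values, not measurements:

```python
# Sabine's estimate of reverberation time: RT60 ~= 0.161 * V / A, with V the
# room volume in cubic meters and A the total absorption (sum of each
# surface's area times its absorption coefficient).
def rt60_s(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """surfaces: (area_m2, absorption_coefficient) pairs for each material."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 5 m x 4 m x 3 m room; coefficients are typical textbook values.
bare = [(20.0, 0.02), (20.0, 0.02), (54.0, 0.03)]      # tile floor, ceiling, walls
carpeted = [(20.0, 0.30), (20.0, 0.02), (54.0, 0.03)]  # same room, carpeted floor
print(f"bare:     {rt60_s(60.0, bare):.1f} s")         # ~4.0 s, very reverberant
print(f"carpeted: {rt60_s(60.0, carpeted):.1f} s")     # ~1.2 s, noticeably damped
```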

Despite all the complexity, we are really good at figuring out information about a space based on these complicated acoustic signatures. But we are really bad at figuring out simple things such as distance based on the delays from echoes. I co-taught a course on psychoacoustics for a number of years and was always amazed at the results of a demo I did in class. I would play two kinds of recorded sounds: a single musical note from a piano, and then a scale followed by a chord. The first time through, I added simple echoes, with different delay lengths between the sound and the echo as well as a reduction in loudness based on the normal losses from spherical spreading. The second time around, I modified the sounds algorithmically to create complex simulations of spaces of different sizes and contents. I would then ask the students
to estimate the simulated distance from the piano to the reflecting wall, as well as the size and contents of the simulated space.

Now, figuring out the distance represented by a single echo should be easy, because all you really need to know is the duration of the interval between the initial sound and its echo; the rest is very easy math. It's a bit like counting the seconds between a visible flash of lightning (moving at about 186,000 miles per second) and the subsequent sound of thunder (moving at the relatively poky one-fifth of a mile per second). Then, as your parents likely told you, you just divide the number of seconds by five to get how many miles away the lightning was. While this was probably done to allay your fears of lightning and thunder, this simple rule of thumb not only introduced you to the world of physics but taught you how to do consciously what your ears and brain already do automatically—figure distance from danger.
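
The whole rule fits in a line of Python, a trivial sketch that just shows the arithmetic:

```python
# The divide-by-five rule: light arrives essentially instantly, while sound
# covers about a fifth of a mile each second.
def miles_to_lightning(seconds_of_delay: float) -> float:
    return seconds_of_delay / 5.0

print(miles_to_lightning(10.0))  # a 10-second gap puts the strike ~2 miles away
```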

So, getting back to the audio demo: if sound travels at about 1,000 feet per second and has to travel to a reflecting surface and back again, you should be able to estimate the distance to the reflecting wall pretty simply. On the other hand, figuring out the parameters of spaces ranging from wooden coffins to Bryce Canyon just from reverberation ought to be horrendously difficult. And yet invariably, year after year, my students could barely tell which echo came from a more distant surface than the preceding one, but they were usually about 80 percent right in identifying the size, shape, materials, and contents of the various simulated spaces. These findings seem counterintuitive: figuring out the distance to the echoing surface is simple arithmetic, whereas computing the contents of an unknown space, even figuring out whether there are bodies in the metal chairs or the room is empty (something more than half of the students got right), requires millions of calculations of time and frequency characteristics so detailed that they would likely tie up a small network of computers for a significant amount of time. But again, your ears and brain, exposed daily to this complex acoustic world, carry out those computations and pass that information to your conscious mind in only a fraction of a second of listening. (While we humans are rather miserable at echo distance calculations, bats probably find the task trivial, since if they don't get it right quickly, they don't eat and tend to fly into trees—or, worse, researchers carrying nets.)
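
For comparison, here is the "simple arithmetic" the students found so hard, as a short sketch using the round 1,000-feet-per-second figure from the text:

```python
# Echo distance: the sound travels out and back, so halve the total path.
SPEED_FT_S = 1000.0  # the book's round figure for the speed of sound in air

def wall_distance_ft(echo_delay_s: float) -> float:
    return SPEED_FT_S * echo_delay_s / 2.0

print(wall_distance_ft(0.5))  # a half-second delay -> the wall is ~250 ft away
```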

Our ability to determine what a place sounds like often figures into the field called architectural acoustics—designing places to sound a specific way. Buildings, particularly ones with large open spaces such as airports and cafeterias, are often so loud and so reverberant that the noise level makes it hard to hear even someone right in front of you. For example, the Main Concourse of Grand Central Station in New York City is a beautiful structure with enormous open spaces. The vaulted ceiling rises about 125 feet, and the walls are faced in granite, marble, and limestone. The result is that the Main Concourse becomes a giant, very lo-fi acoustic reflector for the sounds people generate and for the low-frequency rumble of the trains and traffic in the surrounding areas (which propagates at high speed along the steel and stone infrastructure). This makes the annual holiday concerts held there, not to mention announcements of train arrivals and departures, very difficult to make out. But even in an acoustically messy place such as this, there are sweet spots. If you descend one level to the low ceramic-lined arches across from the Oyster Bar restaurant and whisper facing the wall, you can
be heard amazingly clearly on the other side of the passageway. This is because the hard, curved surface acts as a waveguide for quiet sounds directed along the surface, rather than scattering those sounds. Hence the name “whisper arches.”

But turn-of-the-century train terminals were not typically designed around their sonic properties. Architectural acoustics is a huge field, a multi-billion-dollar industry increasingly seen as critical for quality of life, particularly in urban settings. At the most basic level, builders of offices, hotels, and even homes spend hundreds of millions of dollars a year trying to limit internal noise pollution. Every concert hall, theater, or auditorium—anyplace that is home to something important to listen to—has most likely been designed to maximize the sounds you should hear, damp others, and minimize distortion (or has suffered from a lack of such design). Nor is the field a new one: the ancient Greek amphitheater at Epidaurus, from the fourth century BCE, was a marvel of acoustic engineering. Even though it is an open-air structure with no sound system, it is still legendary for the quality of its acoustics, which were never duplicated in its own era. The underlying basis for its success became understood only in 2007, when Nico Declercq of Georgia Tech and engineer Cindy Dekeyser showed that the physical shape of the seats acts like an acoustic filter, holding back low-frequency sounds and letting higher-frequency sounds propagate. In addition, the use of limestone, a porous stone, absorbed much of the random locally generated noise. (The widespread use of limestone in buildings in Venice is, along with the lack of vehicular traffic, a main reason the city is so quiet.)

More recently, eighteenth- and nineteenth-century concert
halls, with their huge open spaces, hanging draperies, and baroque ornamentation, were often carefully designed to maximize the flow of sound from the stage to every region of the audience. The varying sizes of the theaters' elaborate decorations created a rich acoustic environment in which the reverberations added to the sound rather than muddying it. But with the adoption of cleaner geometric lines in twentieth-century architecture, many spaces started to suffer from acoustic dead spots and muddy sound. I've heard stories from friends of going to concerts in New York's Philharmonic Hall (later called Avery Fisher Hall) and being unable to hear anything but the woodwinds or the strings. It took a complete structural renovation under the guidance of the late Cyril Harris, a master acoustical engineer, to overcome the limitations of the original architecture's clean lines and oddly oriented seats and make the hall worthy of the type of music played there.
