The Universal Sense
Seth Horowitz

The lack of normal emotional undertone in a robotic voice not only makes it harder to relate to but also contributes to the annoyance most people feel when stuck dealing with one, even if it does provide the same information a human being would. On the other hand, think about the immediate comprehensibility of the non-verbal sounds coming from a favorite movie character, R2-D2 from Star Wars. R2-D2ese was created by legendary sound designer Ben Burtt using filtered baby vocalizations for some sounds and completely synthesized boops, beeps, and arpeggios for others. Whether the fictional droid was excited, upset, happy, or sad, the audience had no doubt what was going on in its CPU, even though the sounds were completely devoid of linguistic content and formed entirely from prosodic tone structure and context. But prosodic emotional comprehension is a part of human language and communication and is at least partially language-specific (R2-D2's vocalizations were in fact generated by English-speaking people, although some of my native-Russian-speaking friends claim they had no problem understanding what was meant).

This is why prosody is not a universal language—the bird chirping that makes you feel so relaxed because it reminds you of a spring morning may in fact be coming from a robin who is really pissed off that a bunch of cowbirds have invaded her nest. But it raises some interesting evolutionary questions regarding human communication. Was prosodic sound a precursor to the first human language? There are some universal elements common to all sonically communicating species, such as loudness for emphasis, lower pitch to imply size or dominance, and faster tempo for urgency. And despite linguists' habit of dividing languages into the categories of tonal (such as Mandarin) and non-tonal (such as English), even the basic building blocks of words, phonemes, share common acoustic elements across languages based on the underlying biology of how we make sounds. A recent study by Deborah Ross and colleagues showed that if you examine the frequency distribution of phonemes uttered by male and female speakers in both Mandarin and English, similar patterns emerge. Whether in a tonal language or a non-tonal one, speech sounds most commonly organize around twelve-tone, or chromatic, interval ratios—more commonly known as the twelve steps in a musical scale. The very basis of even non-prosodic human language has its roots in the mathematical underpinnings of sound. The question that arises is, how do these mathematical relationships tie together sensation, emotion, and communication? For that we turn to one of the hardest things science has ever had to face: music.
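
To make those "twelve steps" concrete, the chromatic intervals referred to here can be written as equal-tempered frequency ratios, in which each semitone multiplies frequency by the twelfth root of two. The short sketch below simply lists those ratios as an illustration; it is my own addition and is not drawn from the Ross study itself.

```python
# Illustrative only: the twelve equal-tempered chromatic interval ratios,
# each semitone being a factor of 2**(1/12) above the previous step.
names = ["unison", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
         "perfect 4th", "tritone", "perfect 5th", "minor 6th", "major 6th",
         "minor 7th", "major 7th", "octave"]

for step, name in enumerate(names):
    ratio = 2 ** (step / 12)  # frequency ratio relative to the starting tone
    print(f"{step:2d} semitones  {name:<12s}  ratio = {ratio:.4f}")
```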

Chapter 6
Ten Dollars to the First Person Who Can Define "Music" (and Get a Musician, a Psychologist, a Composer, a Neuroscientist, and Someone Listening to an iPod to Agree …)

About the time I started working on this book, I was contacted by Brad Lisle of Foxfire Interactive to see if I would be willing to be a science consultant for a 3-D IMAX film about sound, titled Just Listen. The very idea of this blew me away—how do you take such an immersively visual medium as a 3-D IMAX film and make it focus on something as non-visual as sound? But Brad has remarkable ideas about education and interaction. He wants to teach people about the sonic world they are immersed in by providing a medium that would refocus their attention. His initial interest in me was due to some of my work in visualizing how bats perceive the world. I found that if you created an animated world with virtual objects made of crystal-like glass material and swapped out the usual lighting parameters for sonic ones, you could create a world of acoustic glints that formed recognizable 3-D shapes as the viewer and objects moved in it. This gave us a handle on how bats perceive the world with their ears, translated into the more common human motif of vision.

But as much as I love them and as common as they are, bats are not really something that most people think about. And most people who study sound tend to take either very mathematical, physics-based approaches or psychological-neuroscience ones. Most audiences who go to even a science-based IMAX film are not really there to learn about near-field effects or frequency-dependent acoustic spaces; they are there to have an experience. Among all the elements Brad and I discussed for the movie, ranging from animal communication to the sounds of human spaces, something was needed that would tie everything together as soon as the movie started. And of course, the one thing that could tie it all together, the one type of math that everyone gets, the one complex neural phenomenon that we all appreciate, is music. But the film couldn't use just any music—both the music and the musician had to be not just spectacular but also able to teach something about the very nature of music and the mind. So in August 2011 my wife and I went to Vancouver to work with the Just Listen team in recording a remarkable musician, Dame Evelyn Glennie.

For those of you unfamiliar with Glennie's work, you've missed something truly remarkable and beautiful. Glennie is a world-renowned percussionist and the first full-time solo percussionist of the last century. When most people think about percussion, they tend to think about it either as just providing the beat for real music or in terms of the aggravation and headache that usually follow when you are stuck at a rock concert for the obligatory ten-minute drum solo that everyone but the drummer's mother could have lived without. But Glennie owns and plays almost two thousand different instruments, from recognizable classics such as marimbas and xylophones to custom-made and haunting oddities such as the waterphone. What fascinated me as I watched her play a short piece she had written was that she wasn't playing the instrument so much as she was playing the room itself. Walking barefoot up to the six-foot-long concert marimba, she positioned herself and the instrument carefully, then rose up on her toes, tilted her head back, and with four mallets struck the first notes and made the whole room ring.

Seated curled up on the edge of the stage, trying to avoid knocking over nearby equipment, I felt the stage tremble and walls of sound fill the space and bounce back like a tidal wave toward the source. In the few seconds before her next notes, everyone could hear the strings of the unplayed grand piano positioned behind her resonating with the force of the near-field sounds from her strike. It was as if she had created a sonic sculpture that changed over the course of the first few seconds. It reminded me of what a fellow band member had said to me years ago: "You can't record music; you can only save CliffsNotes of it. You have to be in the room with it or you're just using it to fill the empty spaces between your ears." As Dame Glennie launched into the remainder of the piece, using her whole body to play the instrument, but always with her bare feet in solid contact with the floor, her head thrown back, exposing her neck and body to the vibrations from the marimba, I realized that I was in the presence of someone who personified the difficulties science has in dealing with music. Evelyn Glennie was filling a space with music.

Oh, and by the way: Glennie is mostly deaf.

Literally thousands of years of research into sound and music, from Pythagoras's study of the mathematics of musical intervals through the most current fMRI neuroimaging studies, treats music as an auditory phenomenon. Neuroscientific and psychological studies examine how music is detected by the ears, perceived and processed by the brain, and finally responded to by the mind. So if all that is true, how does Dame Glennie not only create music but listen to it? Because her definition of music and sound is different from anything you may read in a scientific paper. When I asked her a simple question, one that is capable of starting fistfights at scientific conferences—"What is music?"—her reply was that music is something you create and listen to with your whole body, not just through your ears.

Being the science consultant in this type of situation gives you a certain leeway. While everyone else was positioning 3-D cameras, hoisting microphones, adjusting power supplies, and generally making sure that the next recording would be just so, I set up my own equipment. I knew that Glennie's deafness was not complete but that she had very little access to high-frequency information, so I set up an experiment. I placed two regular PZM boundary microphones on the stage, out of range of the camera and where she wouldn't step on them. These types of microphones are often used to pick up the sound of the whole stage area during live recordings and can pick up sounds to about 22,000 Hz, slightly beyond the upper range of human hearing. This may seem like overkill, since the range of most musical instruments tops out at about 4,000 Hz (which is, interestingly, also the upper limit for human auditory hair cells to phase lock), but as anyone who has listened to music through cheap, blown-out speakers will tell you, without those high-frequency components, music sounds dead. Next to them, I placed two geophones, seismic microphones that only pick up low-frequency vibrations transmitted through solid surfaces such as the stage. This let me record both the sound range heard by people with normal hearing and a simultaneous version restricted to low-frequency impacts and vibrations, similar to what Glennie heard.
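
As a rough illustration of the kind of paired recordings described here (and of the figures that follow), one could take a full-range recording, low-pass it near the roughly 2 kHz geophone cutoff, and compute spectrograms of both versions. The sketch below is only my own approximation of that idea, not the actual analysis behind the figures; the file name, filter settings, and SciPy tooling are assumptions.

```python
# Minimal sketch (assumed tooling, not the author's analysis): compare a
# full-bandwidth recording with a low-passed version that approximates the
# geophone band described in the text.
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, spectrogram

rate, audio = wavfile.read("marimba_pzm.wav")   # hypothetical file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)                  # collapse stereo to mono
audio = audio.astype(float)

# 4th-order Butterworth low-pass at 2 kHz, run forward and backward so the
# filtered signal stays time-aligned with the original.
sos = butter(4, 2000, btype="low", fs=rate, output="sos")
low_band = sosfiltfilt(sos, audio)

# Spectrograms of both versions: rows are frequency bins, columns are time frames.
f_full, t_full, S_full = spectrogram(audio, fs=rate, nperseg=4096)
f_low, t_low, S_low = spectrogram(low_band, fs=rate, nperseg=4096)

print(f"full-range spectrogram reaches {f_full[-1]:.0f} Hz; "
      f"low-passed version has little energy above 2000 Hz")
```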

(Figure A) Sound recording of the marimba piece: spectrogram showing frequency (vertical axis) over time (horizontal axis) from the PZM microphones, over a range of 0 to 22 kHz. The horizontal dotted line marks the 2 kHz cutoff of the geophone recordings.

(Figure B) Seismic recording of the same marimba piece: spectrogram showing frequency over time from the geophones, with a cutoff at 2 kHz.

Looking at the figures above, you can see the difference between the same piece of music recorded with microphones that mimic the pickup of the human ears (Figure A) and with seismic microphones that show only distortion from interacting vibrations above a few hundred hertz (Figure B). In Figure A you can make out the harmonic structure of the music in the regular vertical banding and the tempo of the piece in the brightly colored regions of the mallet strikes, but you can also see a "cloud" of reverberation around each note played, formed by the individual notes filling the room, bouncing back, and changing. This is the level of complexity we hear in live music or well-done studio post-processing. But if you look at the spectrogram of the geophone recordings (Figure B), you can spot the individual mallet strikes as geometrical shapes with clean spaces around them, reaching only about a tenth as high in frequency as the acoustic recording. You can almost see the musical score in the percussive strikes.
If you compare the two types of recordings, the first one is obviously the acoustically rich one, the one our brains have evolved to perceive, decode, and translate into emotional responses, the one we would call "musical." Yet the lower one is closer to what the source, Dame Glennie the musician, actually perceives. While most of us sit there listening with our ears, perhaps (like me) with our eyes closed, she is picking up vibrations from the stage through her bare feet, near-field waves of sound from the resonators striking her legs, lower body, and neck, feeling the feedback from her hands and arms up through her body, some of it even resonating in her skull, providing low-frequency sound through direct bone conduction. Glennie is using her whole body to detect vibration, and this is what she—accurately—calls music. Right here is the heart of the conflict that has always existed between science and music: the question "What is music?"

One of the biggest problems facing scientists who study music lies at the very heart of science itself. Science is about observing phenomena, questioning them, forming and testing hypotheses, and following up those that are successful and (if we're honest and not too politically funded) dumping those that are not. But at the start of any hypothesis is the need to form an operational definition. What are you testing? What are you studying? What parameters can you change without changing what you're studying? Music is notoriously elusive in this way. The title of this chapter is actually derived from how I start my lectures on music and the brain, and even though it sounds like I'm being a smartass, I'm not. I've been a musician, composer, sound designer, producer, and scientist, and even I have trouble coming up with a definition that will satisfy more than two or three parts of my past and present.
