• • •
Reliable statistics on the number of people in the United States who use ASL don't really exist. The US Census asks only about use of spoken languages other than English. There are more than two million deaf people nationally, of whom between a hundred thousand and five hundred thousand are thought to communicate primarily through ASL. That equals less than a quarter of 1 percent of the national population. People have begun to throw around the statistic that ASL is the third or fourth most common language in the country. For that to be true, there would have to be something approaching two million users, which seems unlikely. Anecdotally, interest in ASL does seem to be growing. It is far more common as a second language in college and even high school. After Hurricane Sandy, New York City's mayor Michael Bloomberg was accompanied at every press conference by an interpreter who became a minor celebrity for her captivating signing. But people have to have more than a passing acquaintance with signing to qualify as users of the language, just as many who have some high school French would be hard-pressed to say more than bonjour and merci in Paris and can't be considered French speakers.
Still, bilingualism is the hope of the deaf community. Its leaders agree that Americans need to know English, the language of reading and writing in the United States, but they also value sign language as the “backbone” of the Deaf world. “The inherent capability of children to acquire ASL should be recognized and used to enhance their cognitive, academic, social, and emotional development,” states the National Association of the Deaf. “Deaf and hard of hearing children must have the right to receive early and full exposure to ASL as a primary language, along with English.”
The case for bilingualism has been helped by Ellen Bialystok, a psychologist at York University in Toronto and the most high-profile researcher on the subject today. Her work has brought new appreciation of the potential cognitive benefits of knowing two languages. “What bilingualism does is make the brain different,” Bialystok told an interviewer recently. She is careful not to say the bilingual brain is “categorically better,” but she says that “most of [the] differences turn out to be advantages.”
Her work has helped change old ideas. It was long thought that learning more than one language simply confused children. In 1926, one researcher suggested that using a foreign language in the home might be “one of the chief factors in producing mental retardation.” As recently as a dozen years ago, my friend Sharon, whose native language is Mandarin Chinese, was told by administrators to speak only English to her son when he started school in Houston. It is true that children who are bilingual will be a little slower to acquire both languages and that they will have, on average, smaller vocabularies in both than a speaker of one language would be expected to have. Their grammatical proficiency will also be delayed. However, Bialystok has found that these costs are offset by a gain in executive function, the set of skills we use to multitask, sustain attention, and engage in higher-level thinking: some of the very skills Helen Neville was looking to build up in preschoolers and that have been shown to boost academic achievement.
In one study, Bialystok and her colleague Michelle Martin-Rhee asked young bilingual and monolingual children to sort blue circles and red squares into digital boxes, one marked with a blue square and the other with a red circle, on a computer screen. Sorting by color was relatively easy for both groups: They put blue circles into the bin marked with a blue square and red squares into the box marked with a red circle. But when they were asked to sort by shape, the bilinguals were faster to resolve confusion over the conflicting colors and put blue circles into the box with the red circle and red squares into the bin with the blue square.
When babies are regularly exposed to two languages, differences show up even in infancy, “helping explain not just how the early brain listens to language, but how listening shapes the early brain,” wrote pediatrician Perri Klass in The New York Times. The same researchers who found that monolingual babies lose the ability to discriminate phonetic sounds from other languages before their first birthday showed that bilingual babies keep up that feat of discrimination for longer. Their world of sound is literally wider, without the early “perceptual narrowing” that babies who will grow up to speak only one language experience. Janet Werker has shown that babies with bilingual mothers can tell their moms' two languages apart but prefer both of them over other languages.
One explanation for the improvement is the practice bilinguals get switching from one language to the other. “The fact that you're constantly manipulating two languages changes some of the wiring in your brain,” Bialystok said. “When somebody is bilingual, every time they use one of their languages the other one is active, it's online, ready to go. There's a potential for massive confusion and intrusions, but that doesn't happen. . . . The brain's executive control system jumps into action and takes charge of making the language you want the one you're using.” Bialystok has also found that the cognitive benefits of bilingualism help ward off dementia later in life. Beyond the neurological benefits, there are other acknowledged reasons to learn more than one language, such as the practical advantages of wider communication and greater cultural literacy.
It's quite possible that some of the bias still found in oral deaf circles against sign language stems from the old way of thinking about bilingualism. It must be said, though, that it's an open question whether the specific cognitive benefits Bialystok and others have found apply to sign languages. Bialystok studies people who have two or more spoken languages. ASL travels a different avenue to reach the brain even if it's processed similarly once it gets there. “Is it really just having two languages?” asks Emmorey. “Or is it having two languages in the same modality?” Bits of Spanglish aside, a child who speaks both English and Spanish is always using his ears and mouth. He must decide whether he heard “dog” or perro and can say only one or the other in reply. “For two spoken languages, you have one mouth, so you've got to pick,” says Emmorey. A baby who is exposed to both English and sign language doesn't have to do that. “If it's visual, they know it's ASL. If it's auditory, they know it's English. It comes presegregated for you. And it's possible to produce a sign and a word at the same time. You don't have to sit on [one language] as much.” Emmorey is just beginning to explore this question, but the one study she has done so far, in collaboration with Bialystok, suggests that the cognitive changes Bialystok has previously found stem from the competition between two spoken languages rather than the existence of two language systems in the brain.
Whether ASL provides improvement in executive function, or some other as yet unidentified cognitive benefit, Emmorey argues for the cultural importance of having both languages. “I can imagine kids who get pretty far in spoken English and using their hearing, but they're still not hearing kids. They're always going to be different,” she says. Many fall into sign language later in life. “[They] dive into that community because in some ways it's easy. It's: ‘Oh, I don't have to struggle to hear. I can just express myself, I can just go straight and it's visual.'” She herself has felt “honored and special” when she attends a deaf cultural event such as a play or poetry performance. “It's just gorgeous. I get this [experience] because I know the language.”
Perhaps the biggest problem with achieving bilingualism is the practical one of getting enough exposure and practice in two different languages. When a reporter asked Bialystok if her research meant that high school French was useful for something other than ordering a special meal in a restaurant, Bialystok said, “Sorry, no. You have to use both languages all the time. You won't get the bilingual benefit from occasional use.” It's true, too, that for children who are already delayed in developing language, as most deaf and hard-of-hearing children are, there might be more reason to worry over the additional delays that can come with learning two languages at once. The wider the gap gets between hearing and deaf kids, the less likely it is ever to close entirely. When parents are bilingual, the exposure comes naturally. For everyone else, it has to be created.
• • •
I didn't know if Alex would ever be truly bilingual, but the lessons with Roni were a start. In the end, they didn't go so well, through no fault of hers. It was striking just how difficult it was for the boys, who were five, seven, and ten, to pay visual attention, to adjust to the way of interacting that was required in order to sign. It didn't help that our lessons were at seven o'clock at night and the boys were tired. I spent more time each session reining them in than learning to sign. The low point came one night when Alex persisted in hanging upside down and backward off an armchair.
“I can see her,” he insisted.
And yet he was curious about the language. I could tell from the way he played with it between lessons. He decided to create his own version, which seemed to consist of opposite signs: YES was NO, and so forth. After trying and failing to steer him right, I concluded that maybe experimenting with signs was a step in the right direction.
Even though we didn't get all that far that spring, there were other benefits. At the last session, after I had resolved that one big group lesson in the evening was not the way to go, Alex did all his usual clowning around and refusing to pay attention. But when it was time for Roni to leave, he gave her a powerful hug that surprised all of us.
“She's deaf like me,” he announced.
To my left, a boisterous group is laughing. To my right, there's another conversation under way. Behind me, too, people are talking. I can't make out the details of what they're saying, but their voices add to the din. They sound happy, as if celebrating. Dishes clatter. Music plays underneath it all.
A man standing five feet in front of me is saying something to me.
“I'm sorry,” I call out, raising my voice. “I can't hear you.”
Here in the middle of breakfast at a busy restaurant called Lou Malnati's outside Chicago, the noise is overpowering. Until it's turned off.
I'm not actually at a restaurant; I'm sitting in a soundproof booth in the Department of Speech and Hearing Science at Arizona State University. My chair is surrounded by eight loudspeakers, each of them relaying a piece of restaurant noise. The noise really was from Lou Malnati's, but it happened some time ago. An engineer named Lawrence Revit set up an array of eight microphones in the middle of the restaurant's dining room and recorded the morning's activities. The goal was to create a real-world listening environment, but one that can be manipulated. The recordings can be played from just one speaker or from all eight or moved from speaker to speaker. The result is remarkably real: chaotic and lively, like so many restaurants where you have to lean in to hear what the person sitting across from you is saying.
The man at the door trying to talk to me is John Ayers, a jovial eighty-two-year-old Texan who has a cochlear implant in each ear. Once the recording has been switched off, he repeats what he'd said earlier.
“It's a torture chamber!” he exclaims with what I have quickly learned is a characteristic hearty laugh.
Ayers has flown from Dallas to Phoenix to willingly submit himself to this unpleasantness in the name of science. Retired from the insurance business, he is a passionate gardener (he brought seeds for the lab staff on his last visit) and an even more passionate advocate for hearing. Since receiving his first implant in 2005 and his second early in 2007, he has found purpose in serving as a research subject and helping to recruit other participants.
“Are you ready?” asks Sarah Cook, the graduate student who manages ASU's Cochlear Implant Lab and will run the tests today.
“Let me at it!” says Ayers.
After he bounds into the booth and takes his seat, Cook closes the two sets of doors that seal him inside. She and I sit by the computers and audiometers from which she'll run the test. For the better part of the next two hours, Ayers sits in the booth, trying to repeat sentences that come at him through the din of the restaurant playing from one or more speakers.
• • •
Hearing in noise remains the greatest unsolved problem for cochlear implants and a stark reminder that although they now provide tremendous benefit to many people, the signal they send is still exceedingly limited. “One thing that has troubled me is sometimes you hear people in the field talking about [how people have] essentially normal hearing restored, and that's just not true,” says Don Eddington of MIT. “Once one is in a fairly noisy situation, or trying to listen to a symphony, cochlear implants just aren't up to what normal hearing provides.”
It wasn't until Alex lost his hearing that I properly heard the noise of the world.
Harvey Fletcher of Bell Labs described noise as sounds to which no definite pitch can be assigned, and as everything other than speech and music. Elsewhere, I've seen it defined as unwanted sound. The low hum of airplane cabins or car engines, sneakers squeaking and balls bouncing in a gym, air conditioners and televisions, electronic toys, a radio playing, Jake and Matthew talking at once, or tap water running in the kitchen. All of it is noise and all of it makes things considerably harder for Alex. Hearing aids aren't selective in what they amplify. Cochlear implants can't pick and choose what sounds to process. So noise doesn't just make it harder to understand what someone is saying; ironically, it can also be uncomfortably loud for a person with assistive devices. Some parents of children with implants or hearing aids stop playing music completely at home in an effort to control noise levels. Many people with hearing loss avoid parties or restaurants. We haven't gone quite that far, but I was continually walking into familiar settings and hearing them anew.
To better assess how people with hearing loss function in the real world, audiologists routinely test them “in noise” in the sound booth. The first time Lisa Goldin did that to Alex was the only time he put his head down on the table and refused to cooperate. She was playing something called “multitalker babble,” which sounded like simultaneous translation at the United Nations. Even for me, it was hard to hear the words Alex was supposed to pick out and repeat. Lisa wasn't trying to be cruel. An elementary school classroom during lunchtime or small group work can sound as cacophonous as the United Nations. Even then, hearing children are learning from one another incidentally. Like so much else in life, practice would make it easier for Alex to do the same (though he'll never get as much of this kind of conversation as the others), and the test would allow Lisa to see if there was any need to adjust his sound processing programs to help.
Hearing in noise is such a big problem, and such an intriguing research question, that it has triggered a subspecialty in acoustic science known as the “cocktail party problem.” Researchers are asking: How do you manage to stand in a crowd and not only pick out but also understand the voice of the person with whom you're making small talk amid all the other chatter of the average gathering? Deaf and hard-of-hearing people have their own everyday variation: the dinner table problem. Except that unlike hearing people at a party, deaf people can't pick out much of anything. Even a mealtime conversation with just our family of five can be hard for Alex to follow, and a restaurant is usually impossible. His solution at a noisy table is to sit in my lap so I can talk into his ear, or he gives up and plays with my phone, and I let him.
“If we understand better how the brain does it with normal hearing, we'll be in a better position to transmit that information via cochlear implants or maybe with hearing aids,” says Andrew Oxenham, the auditory neuroscientist from the University of Minnesota. Intriguingly, understanding the cocktail party problem may not only help people with hearing loss but could also be applied to automatic speech recognition technology. “We have systems that are getting better and better at recognizing speech, but they tend to fail in more complicated acoustic environments,” says Oxenham. “If someone else is talking in the background or if a door slams, most of these programs have no way of telling what's speech and what's a door slamming.”
The basic question is how we separate what we want to listen to from everything else that's going on. The answer is that we use a series of cues that scientists think of as a chain. First, we listen for the onset of new sounds. “Things that start at the same time and often stop at the same time tend to come from the same source. The brain has to stream those segments together,” says Oxenham. To follow the segments over time, we use pitch. “My voice will go up and down in pitch, but it will still take a fairly smooth and slow contour, so that you typically don't get sounds that drastically alter pitch from one moment to the next,” says Oxenham. “The brain uses that information to figure out, well, if something's not varying much in pitch, it probably all belongs to the same source.” Finally, it helps to know where the sound is coming from. “If one thing is coming from the left and one thing is coming from the right, we can use that information to follow one source and ignore another.”
The ability to tell where a sound is coming from is known as spatial localization. It's a skill that requires two ears. Anyone who has played Marco Polo in the pool as a child will remember that people with normal hearing are not all equally good at this, but it's almost impossible for people with hearing loss. This became obvious as soon as Alex was big enough to walk around the house by himself.
“Mom, Mom, where are you?” he would call from the hall.
“I'm here.”
“Where?”
“Here.”
Looking down through the stairwell, I could see him in the hall one floor below and perhaps fifteen feet away, looking everywhere but at me.
“I'm here” wouldn't suffice. He couldn't even tell if I was upstairs or downstairs. I began to give the domestic version of latitude and longitude: “In the bathroom on the second floor.” Or “By the closet in Jake's room.”
To find a sound, those with normal hearing compare the information arriving at each ear in two ways: timing and intensity. If I am standing directly in front of Alex, his voice reaches both of my ears simultaneously. But if he runs off to my right to pet the dog, his voice will reach my right ear first, if only by a millionth of a second. The farther he moves to my right, the larger the difference in time. There can also be a difference in the sound pressure level or intensity as sounds reach each ear. If a sound is off to one side, the head casts a shadow and reduces the intensity of the sound pressure level on the side away from the source.
Time differences work well for low-frequency waves. Because high-frequency waves are smaller and closer together, they can more easily be located with intensity differences. At 1,000 Hz, the sound level is about eight decibels louder in the ear nearer the source, but at 10,000 Hz it could be as much as thirty decibels louder. At high frequencies, we can also use our pinna (the outermost part of the ear) to figure out if a sound is in front of us or behind. Having two ears, then, helps with the computations our brain is constantly performing on the information it is taking in. We can make use of the inherent redundancies to compare and contrast information from both ears.
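A rough sketch of the timing cue, using the classic Woodworth spherical-head approximation; the head radius, speed of sound, and function name are illustrative assumptions rather than figures from the text:

```python
import math

def interaural_time_difference(angle_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Estimate the interaural time difference (ITD), in seconds, for a sound
    arriving angle_deg degrees off the midline, using the Woodworth
    spherical-head approximation: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(angle_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead produces no timing difference; a source off to one
# side produces a difference of at most a few hundred microseconds.
for angle in (0, 10, 45, 90):
    itd_us = interaural_time_difference(angle) * 1e6
    print(f"{angle:3d} degrees off-center -> ITD of about {itd_us:6.1f} microseconds")
```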
Hearing well in noise requires not just two ears but also a level of acoustic information that isn't being transmitted in today's implant. A waveform carries information both in big-picture outline and in fine-grained detail. Over the past ten years, sound scientists have been intensely interested in the difference, which comes down to timing. To represent the big picture, they imagine lines running along the top and bottom of a particular sound wave, with the peaks and troughs of each swell bumping against them. The resulting outline is known as the envelope of the signal, a broad sketch of its character and outer limits that captures the slowly varying overall amplitude of the sound. What Blake Wilson and Charlie Finley figured out when they created their breakthrough speech processing program, CIS, was how to send the envelope of a sound as instructions to a cochlear implant.
The rest of the information carried by the waveform is in the fine-grained detail found inside the envelope. This “temporal fine structure” carries richness and depth. If the envelope is the equivalent of a line drawing of, for example, a bridge over a stream, fine structure is Monet's painting of his Japanese garden at Giverny, full of color and lush beauty. The technical difference between the two is that the sound signal of the envelope changes slowly over time, on the order of at most several hundred times a second, whereas “fine structure is the very rapidly varying sound pressure, the phase of the signal,” says Oxenham. In normal hearing, the fine structure can vary more than a thousand times a second, and the hair cells can follow along.
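A minimal sketch of the envelope-versus-fine-structure split, using the Hilbert transform (a standard signal-processing tool not mentioned in the text) on an invented test tone; it illustrates the distinction, not the actual CIS processing chain:

```python
import numpy as np
from scipy.signal import hilbert

# Build a simple test signal: a 200 Hz tone whose loudness swells and fades.
fs = 16000                                                # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)                             # half a second of samples
slow_amplitude = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))    # slowly varying "envelope"
signal = slow_amplitude * np.sin(2 * np.pi * 200 * t)     # rapid "fine structure" underneath

# The analytic signal separates the two: its magnitude is the envelope,
# and the cosine of its phase is the temporal fine structure.
analytic = hilbert(signal)
envelope = np.abs(analytic)
fine_structure = np.cos(np.angle(analytic))

# The envelope changes only a few times per second; the fine structure
# oscillates hundreds of times per second, which is roughly the distinction
# between what an implant transmits and what it leaves out.
print("envelope varies between", round(float(envelope.min()), 2),
      "and", round(float(envelope.max()), 2))
print("fine structure oscillates at ~200 Hz, like the original carrier")
```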
An implant isn't up to that task. So far, researchers have been stymied by the limits of electrical stimulation, or more precisely by its excesses. When multiple electrodes stimulate the cochlea, in an environment filled with conductive fluid, the current each one sends spreads out beyond the intended targets. Hugh McDermott, one of the Melbourne researchers, uses an apt analogy to capture the problem. He describes the twenty-two electrodes in the Australian cochlear implant as twenty-two cans of spray paint, the neurons you're trying to stimulate as a blank canvas, and the paint itself as the electrical current running between the two. “Your job in creating a sound is to paint something on that canvas,” says McDermott. “Now the problem is, you can turn on any of those cans of paint anytime you like, but as the paint comes out it spreads out. It has to cross a couple meters' distance and then it hits the canvas. Instead of getting a nice fine line, you get a big amorphous blob. To make a picture of some kind, you won't get any detail. It's like a cartoon rather than a proper painting.” In normal hearing, by contrast, the signals sent by hair cells, while also electrical, are as controlled and precise as the narrowest of paintbrushes.
So while it seems logical that more electrodes lead to better hearing, the truth is that because of this problem of current spread, some of the electrodes cancel one another out.
René Gifford, of Vanderbilt University, is working on a three-way imaging process that allows clinicians to determine (or really improve the odds on guessing) which electrodes overlap most significantly, and then simply turn some off. “Turning off electrodes is the newest, hottest thing,” says Michael Dorman of Arizona State University, who shared Gifford's results with me. Gifford is a former member of Dorman's laboratory, so he's rooting for her. Half of those she tested benefited from this strategy. Other researchers are working on other ideas to solve the current-spread problem. Thus far, the best implant manufacturers have been able to do is offer settings that allow a user to reduce noise if the situation requires it. “It's more tolerable to go into noisy environments,” says Dorman. “They may not understand anything any better, but at least they don't have to leave because they're being assaulted.”
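McDermott's spray-paint picture can be turned into a toy calculation. The sketch below assumes that each electrode's excitation decays exponentially along the cochlea; the electrode spacing and decay constant are made-up illustrative values, not measurements from any real device:

```python
import numpy as np

# Toy model of current spread: each electrode "sprays" excitation that
# decays exponentially with distance along the cochlea.
n_electrodes = 22            # electrode count, as in the Australian implant
array_length_mm = 22.0       # assumed overall span of the array
length_constant_mm = 3.0     # assumed spatial decay of the current

positions = np.linspace(0.0, array_length_mm, n_electrodes)  # electrode positions
cochlea = np.linspace(0.0, array_length_mm, 500)             # the "canvas"

def excitation(active, levels):
    """Sum the exponentially decaying contribution of each active electrode."""
    total = np.zeros_like(cochlea)
    for idx, level in zip(active, levels):
        total += level * np.exp(-np.abs(cochlea - positions[idx]) / length_constant_mm)
    return total

# Turn on two neighboring electrodes at equal level.
blob = excitation([10, 11], [1.0, 1.0])
midpoint = (positions[10] + positions[11]) / 2
dip = blob[np.argmin(np.abs(cochlea - midpoint))]
print(f"peak excitation: {blob.max():.2f}, excitation midway between them: {dip:.2f}")
# The midpoint is excited almost as strongly as the peaks, so two intended
# "lines" merge into one broad blob -- the cartoon rather than the painting.
```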
• • •
In a handful of labs like Dorman's, where Ayers and Cook are hard at work, the cocktail party problem meets the cochlear implant. “You need two ears of some kind to solve the cocktail party problem,” says Dorman, a scientist who, like Poeppel, enjoys talking through multiple dimensions of his work. For a long time, however, no one with cochlear implants had more than one. The reasons were several: a desire to save one ear for later technology; uncertainty about how to program two implants together; and, probably most significant, cost and an unwillingness on the part of insurance companies to pay for a second implant. As I knew from my experience with Alex, for a long time it was also uncommon to use an implant and hearing aid together. Within the past decade, and especially the past five years, that has changed dramatically. If a family opts for a cochlear implant for a profoundly deaf child, it is now considered optimal to give that child two implants simultaneously at twelve months of age or earlier. In addition, as candidacy requirements widen, there is a rapidly growing group of implant users with considerable residual hearing in the unimplanted ear. Some even use an implant when they have normal hearing in the other ear.