So Tong and Clark hit on plan B, which was to try to extract the elements that convey the most information in speech and send only those through the implant. The new version was almost as simple as the first one had been complex. It made use of formants, the bands of dominant energy first described at Bell Labs that vary from one sound to another. If we produced sounds only with the larynx, we wouldn't get anything but low frequencies. However, those low frequencies contain harmonics up to ten or twenty times higher. If the larynx is vibrating a hundred times per second—at 100 Hz—the harmonics are at 200 and 300 Hz, up to 2,000 Hz or higher. “As you move your lips and tongue and open and close the flap that's behind your nose to create speech sounds, all those things modify the amplitude of all the different harmonics,” says Hugh McDermott, an acoustic engineer who joined Clark's team in 1990. For each sound, the first region where the frequencies are emphasized—where the energy is strongest—is called the first formant, the next one going up is the second formant, and so on. The new speech processing program was known as F0F2 (or “F naught, F two” when Australians speak of it). That meant that the program extracted only two pieces of information from each speech sound: its fundamental frequency (F0) and the second formant (F2). “It's the first two formants that contain nearly all of the information that you need to understand speech,” says McDermott. The second formant is also difficult to see on the lips, so it was a particularly useful extra piece of information. “If you have to nail down just one parameter,” says McDermott, “that's the one to choose.”
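To make the formant idea concrete, here is a minimal Python sketch of how F0 and the formants of a single speech frame can be estimated digitally: F0 from the autocorrelation peak, formants from the roots of a linear-prediction (LPC) polynomial. This is a generic textbook method shown for illustration, not the Melbourne group's actual processor; the sampling rate, LPC order, thresholds, and the synthetic test vowel are all assumed values.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Fundamental frequency (Hz) from the strongest autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)    # search plausible pitch lags only
    return fs / (lo + np.argmax(ac[lo:hi]))

def estimate_formants(frame, fs, order=10):
    """Formant frequencies (Hz) from the roots of an LPC polynomial."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1)                    # Levinson-Durbin recursion
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    formants = []
    for z in np.roots(a):
        if z.imag <= 0:
            continue                           # keep one of each conjugate pair
        freq = np.angle(z) * fs / (2 * np.pi)
        bandwidth = -fs / np.pi * np.log(abs(z))
        if 90 < freq < fs / 2 - 90 and bandwidth < 400:  # narrow resonances only
            formants.append(freq)
    return sorted(formants)                    # [0] ~ F1, [1] ~ F2, ...

# Synthetic vowel-like frame: 100 Hz harmonics shaped by bumps at 500/1500 Hz.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = sum((np.exp(-((k * 100 - 500) / 150) ** 2) +
             np.exp(-((k * 100 - 1500) / 150) ** 2)) *
            np.sin(2 * np.pi * k * 100 * t) for k in range(1, 40))
print(round(estimate_f0(frame, fs)))                      # ~100
print([round(f) for f in estimate_formants(frame, fs)])   # peaks near 500, 1500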

F0F2, in other words, was lean and mean. Recognizing speech on the basis of a few formants is like identifying an entire mountain range from the outline of only its two most distinctive peaks. It worked by having one electrode present a rate of electric pulses that matched the vibration—the fundamental frequency—of the larynx. Then the second formant, which might be as high as 1.5 kHz (kilohertz), was represented on a second electrode. That second formant moved around from electrode to electrode according to the speech sounds created. F0F2 sounded even more mechanical and synthetic than implants do today, and it took a lot of getting used to, but it worked because only one electrode was on at a time, eliminating the problem of overstimulation.
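In code, the scheme this paragraph describes reduces to a tiny mapping: F2 selects which electrode fires, and F0 sets its pulse rate. The sketch below illustrates that logic only, not the actual device; the 20-electrode count and the F2 band edges are assumptions made for the example.

```python
import numpy as np

N_ELECTRODES = 20                      # assumed array size, for illustration
# Log-spaced band edges across an assumed F2 range (Hz).
F2_EDGES = np.geomspace(300.0, 3000.0, N_ELECTRODES + 1)

def f0f2_frame(f0_hz, f2_hz):
    """Map one frame's (F0, F2) to (electrode number, pulses per second).

    Only one electrode is active per frame, which is what sidestepped the
    overstimulation problem of the first, more complex scheme.
    """
    band = int(np.clip(np.digitize(f2_hz, F2_EDGES) - 1, 0, N_ELECTRODES - 1))
    electrode = N_ELECTRODES - band    # low F2 -> apical, low-pitch end
    return electrode, f0_hz            # pulse rate tracks the voicing pitch

print(f0f2_frame(120.0, 1500.0))       # a voiced frame: F0 = 120 Hz, F2 = 1.5 kHz
```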

With this new processing system in place, Saunders began to understand some limited speech. He could be 60 to 70 percent correct on tests that asked him to identify the vowels embedded in words: “heed,” “hard,” “hood,” “had,” and so on. As the end of 1978 neared and money was running short again, Clark insisted they try Saunders on a harder test: what's known as open-set speech recognition. Until then, they had done only closed sets—reciting words that were part of familiar categories, such as types of fruit. Speech in real life, of course, isn't so predictable; open-set testing throws wide the possibilities. Angela Marshall was hesitant, fearing it wouldn't work. “I said, if we fail, we fail,” says Clark. As the group stood watching with bated breath, Marshall presented one unrelated word at a time.

“Ship,” she said.

“Chat,” replied Saunders. Completely wrong.

“Goat.”

“Boat,” said Saunders. Closer.

“Rich.”

“Rich,” said Saunders. He had gotten one right!

By the end of the tests, Saunders had gotten 10 to 20 percent of the open-set words correct. That's not the least bit impressive by today's standards, but it was hugely significant at the time for a man who was profoundly deaf. Clark was overcome. “I knew that had really shown that this was effective,” he says. He pointed down the hallway of the hospital where we sat. “I was so moved, I simply went into the lab there and burst into tears of joy.” The only other time he cried in his adult life, he told me, was when he and Margaret worried over the health of one of their five children.

George Watson became the second Australian to be implanted, in July 1979. His early audiological results were promising enough that Lois Martin, who had taken over for Angela Marshall, “decided to go for broke,” says Clark, and read Watson some lines from the daily newspaper, then asked him to repeat what she'd said. “He'd nearly got it all right,” says Clark, who was sitting in his office up the hall working at the time. “There was great excitement and they came rushing up the corridor to tell me,” he remembers. They had wondered what Watson's brain would remember of sound. “George was showing us that he could remember the actual sound that he'd heard thirteen years before.” Clark had only two patients at this point, “but the two of them together told us a lot about what was possible.”

Not that there weren't some disasters. Until then, Saunders and Watson had been able to use their implants only in the laboratory, hooked up to the computer. Clark instructed an engineer named Peter Seligman to develop a portable device that they could take home. “It was as big as a handbag,” remembers Dowell. “We thought it was fantastic.” They called a press conference to announce that two patients were successfully using a portable cochlear implant.

“That day, George's implant failed,” says Dowell. “I was preparing him to go out there to talk to the press, and he told me that it had stopped working.” Watson reckoned he could wing it using his speechreading skills. And he did. Watson told the assembled reporters how wonderful his implant was. “None of those guys who were there that day would have had a clue that he wasn't actually hearing anything,” says Dowell. Rod Saunders, on the other hand, although his implant was working, appeared to be struggling more because he had such a hard time reading lips. “It was reported as a great breakthrough,” Dowell says, laughing. “It's so long ago I can tell the story.” Clark, too, is willing to tell the story today. “If they'd asked, I would have to have said the implants failed. I just sort of held my breath and they didn't ask.”

 • • • 

Meanwhile, in San Francisco, another team had been pursuing a similar path. Auditory neuroscientist Michael Merzenich had been recruited to the University of California, San Francisco, in part because a doctor there, Robin Michelson, was pursuing a cochlear implant. Two of Michelson's patients participated in the review performed at the University of Pittsburgh by Robert Bilger. The head of UCSF's otolaryngology department, Francis Sooy, thought the idea of cochlear implants had merit, but that it needed a more scientific approach.

“When I showed up at UCSF, I was intrigued by the idea, but when I talked to Michelson, I realized that he understood almost nothing about the inner ear and nothing about auditory coding issues,” Merzenich told me. “He had the idea if we just inject the sound, it will sort it out. He talked about what he was seeing in his patients in a very grandiose way.” As an expert in neurophysiology, Merzenich found such talk hard to take, especially since he was engaged in a series of studies that revealed new details about the workings of the central auditory system. He discovered that information wasn't just passed along from level to level—from the cochlear nuclei in the brain stem to the superior olive and so on. Along the way, the system both pulled out and put back information, always keeping it sorted by frequency but otherwise dispersing information broadly. “Our system extracted information in a dozen ways; combined it all, then extracted it again; then combined and extracted again; then again—just to get [to the primary auditory cortex],” Merzenich later wrote. “Even this country boy could understand the potential combinative selectivity and power of such an information processing strategy!” On the other hand, he could also see how hard such a system would be to replicate.

Robin Michelson, it must be said, was a smart man. He originally trained as a physicist, two of his children became physicists, and his uncle Albert Michelson won the Nobel Prize in Physics. Several people described him to me as a passionate tinkerer, with grand ideas but not always the ability to see them through. Merzenich couldn't really see how such a device as Michelson described would ever be usable clinically as anything more than a crude aid to lipreading. Unresolved safety issues worried him as well. “The inner ear is a very fragile organ,” he says, “and I thought that surely introducing these electrodes directly into the inner ear must be pretty devastating to it and must carry significant risk.”

Besides, Merzenich was soon busy conducting what would turn out to be revolutionary studies on brain plasticity. That was one of the reasons I had been so eager to talk to him. For someone like me, who wanted to know about both cochlear implants and how the brain changed with experience, there was no other researcher in the world who had played such a major role in both arenas. By the time I wrote to him, Merzenich was serving as chief scientific officer (and cofounder) of Posit Science, a brain-training software company whose flagship product is BrainHQ. We met in their offices in downtown San Francisco. “If you think about it,” he told me, “[the cochlear implant] is the grandest brain plasticity experiment that you can imagine.”

As a postdoctoral fellow at the University of Wisconsin, Merzenich had participated in an intriguing study showing changes in the cortex of macaque monkeys after nerves in their hands were surgically repaired. He hadn't quite tumbled to the significance of these findings, but shortly after he arrived at UCSF in 1970, he decided to pursue this line of research as well as his auditory work. Together with Jon Kaas, a friend and colleague at Vanderbilt University, Merzenich set up a study with adult owl monkeys. By laboriously touching different parts of the hand and recording where each touch registered in the brain, they created a picture that correlated what happened on the hand to what happened in the brain. With the “before” maps complete, the researchers severed the median nerve in the monkeys' right hands, leaving the animals unable to feel anything near the thumb and nearby fingers. (Here, too, the requirements of science are discomfiting, to say the least.) According to standard thinking on the brain at the time, that should have left a dead spot in the area that had previously been receiving messages from the nerve. A few months later, after the monkeys had lived with their new condition for a time, Merzenich and Kaas studied the animals' brains. Completely contrary to the dogma of the day, they found that those areas of the brain were not dead at all. Instead, they were alive with signals from other parts of the hand.

In another study, one of Merzenich's collaborators, William Jenkins, taught owl monkeys to reach through their cages and run their fingers along a grooved spinning disk with just the right amount of pressure to keep the disk spinning and their fingers gliding along it. When Jenkins and Merzenich studied the maps of those monkeys' brains, they found that even a task such as running a finger along a disk—much less dramatic than a severed nerve—had altered the map. The brain area responding to that particular finger was now four times larger than it had been.

Finally, Merzenich understood what he was seeing: that the brain is capable of changing with experience throughout life. It was a finding that changed his career. It also brought considerable controversy. “Initially, the mainstream saw the arguments we made for adult plasticity in monkeys as almost certainly wrong,” he recalls. Torsten Wiesel, whose Nobel Prize had been awarded in part for establishing the critical period, was among the major scientists who said outright that the findings could not be true. It was years before the work was widely accepted.

Right away, however, the work on plasticity prodded Merzenich's thinking about the development of cochlear implants. Robin Michelson had been persistent. “For a year, this very nice man pestered me and pestered me and pestered me,” remembers Merzenich. “He would come to see me at least once or twice a week and tell me more about the wondrous things he'd seen in his patients and bug me to come help him.” For a year, Merzenich resisted. “Finally, in part to get him off my back, I said okay.” Merzenich agreed to do some basic psychophysical experiments—create some stimuli and apply them—and see what Michelson's best patient could really hear.

The patient was a woman named Ellen Bories from California's Central Valley. “A great lady,” says Merzenich fondly. “We started trying to define what she could and couldn't hear with her cochlear implant, and I was blown away by what she could hear with this bullshit device.” Michelson's implant, at that point, was “a railroad track electrode,” as Merzenich had taken to calling it dismissively, that delivered only a single channel of information. While Bories could not recognize speech, she could do some impressive things. In the low-frequency range, up to about 1,000 Hz, she could distinguish between the pitches of different vowels—she could hear and repeat “ah, ee, ii, oh, uh.” She could tell whether a speaking voice was male or female. When they played her some music, says Merzenich, “she could actually identify whether it was an oboe or a bassoon. I was just amazed what she could get with one channel.”

Not surprisingly, he now saw the effort in a whole new light. For one thing, “I realized how much it could mean even to have these simple, relatively crude devices,” he says. “She found it very helpful for lipreading and basic communication and was tickled pink about having this.” For another, and significantly, Merzenich saw a way forward. At Bell Labs, researchers had never stopped working on the combination of sound and electricity. In the 1960s, they had explored the limits of minimizing the acoustic signal. “They took the complex signal of voice and reduced it and reduced it and reduced it,” explains Merzenich. “They said, How simply can I represent the different sound components and still represent speech as intelligible?”

Even earlier, in the 1940s, Harvey Fletcher had introduced the concept of critical bandwidth, a way of describing and putting boundaries on the natural filtering performed by the cochlea. Although there are thousands of frequencies, if those that are close together sound at the same time, one tone can “mask” the other or interfere with its perception, like a blind spot in a rearview mirror. Roughly speaking, critical bandwidth defined the areas within which such masking occurred. How many separate critical bandwidths were required for intelligible speech? The answer was eleven, and Merzenich realized it could apply to cochlear implants, too. The implanted electrodes would be exciting individual sites, each of which represented specific sounds. “The question was how many sites would I need to excite to make that speech perfectly intelligible. Eleven is sort of the magical number.”
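That reduction maps naturally onto code. Below is a minimal sketch of the underlying idea: split a signal into eleven contiguous bands and keep only each band's slowly varying energy envelope. The band edges and filter orders here are round-number assumptions, not values from Fletcher's work.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000                                         # assumed sampling rate (Hz)
N_BANDS = 11                                       # Fletcher-style channel count
EDGES = np.geomspace(100.0, 7000.0, N_BANDS + 1)   # assumed band edges (Hz)
SMOOTH = butter(2, 50.0, btype="lowpass", fs=FS, output="sos")

def band_envelopes(speech):
    """Return an (N_BANDS, len(speech)) array of band energy envelopes."""
    envs = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, speech)                 # isolate one band
        envs.append(sosfiltfilt(SMOOTH, np.abs(band)))  # rectify and smooth
    return np.array(envs)

# Demo on one second of noise standing in for speech:
rng = np.random.default_rng(0)
print(band_envelopes(rng.standard_normal(FS)).shape)    # (11, 16000)
```

In a multichannel implant, each such envelope would drive one stimulation site, which is the sense in which eleven became Merzenich's “magical number” of channels.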
