I Can Hear You Whisper
by Lydia Denworth
15

A Perfect Storm

At first, doctors thought eighteen-month-old Caitlin Parton had the flu. When they finally realized the toddler with her father's brown hair and her mother's blue eyes had meningitis, she was rushed to the hospital and stayed there for days. Doctors saved her life, but not her hearing. When she went home to her family's Manhattan apartment, Caitlin was profoundly deaf. Her parents, Steve Parton, an artist, and Melody James, an actress and director, were in shock.
“People speak of the grief they feel on learning of their baby's hearing loss,” James said years later. “For me, it felt like steep walls were suddenly in the path of our child's possibilities.”

It was 1987. Fewer than three thousand people in the world had cochlear implants, nearly all of them adults who had lost their hearing after learning language. Those who'd heard before were thought to be more likely to succeed with the new device. “We didn't know how long their memories for sound would stay, but one assumed that they could tell us better what it was like,” says Graeme Clark. For Clark and many other researchers, however, adults were just the beginning. “Deep down, I hoped it would help children,” Clark says. “To give them an opportunity to communicate in the world of sound was really my life's work. But I couldn't do it until I'd done it in adults.”

Whether a young brain that had never heard could make sense of what it received through an implant was a critical question. And it was one for which there was not yet a good answer. The explosion of studies in neuroscience and brain imaging lay a few years ahead. Helen Neville had just begun her studies of the brains of deaf and blind subjects in 1983. “Helping children was a completely different ball game,” says Clark. “On the one hand, the theory was that [with] children, because their brains are supposedly plastic, you could give them anything and they might understand it. On the other hand, the contrary argument was: If they've never been exposed to sound, then these artificial electrical signals aren't going to be as good as the real thing. The dilemma was: Which hypothesis was correct?”

The stakes were different for children as well. “For kids, of course, what really counts is their language development,” Richard Dowell told me. In addition to working with Rod Saunders and George Watson, Dowell had to figure out how to test the implant's viability in children. “In kids, you're trying to give them good enough hearing to actually then use that to assist their language development as close to normal as possible. So the emphasis changes very, very much when you're talking about kids.”

Looming over the scientific question was the ethical question of whether it was right to subject young children to what amounted to experimental surgery. Clark's team tested issues of biological safety aggressively. But an implant for a child might be in place for decades, for a lifetime. Were the unproven possibilities worth the unknown risks—or even the known risks that accompany any surgery? Many clinicians were skeptical, sometimes angrily so. In 1984, one prominent otolaryngologist was quoted in Medical World News as saying, “There is no moral justification for an invasive electrode for children.” He told the journal he found the cochlear implant a costly and “cruel incentive,” designed to appeal to conscientious parents who may seek any means that will enable their children to hear. “It's a toboggan ride for those parents, and at the end of the ride is only a deep depression and you may hurt the kid.”

For Bill House, it was meeting families of children who couldn't hear that had catalyzed his interest in deafness. Just as he hadn't let scientific and clinical skepticism stop him from operating on adults, he also pushed forward with children who'd lost their hearing to meningitis. As early as 1981, he had put his single-channel implant into the youngest person to be implanted to that point, three-year-old Tracy Husted. In 1984, two-year-old Matt Fiedor became the seventy-third child to get one of House's implants. His mother, Paulette, told me, “I was convinced that he had nothing, and if he could get any benefit from this device, something would be better than nothing.”

There were no guarantees. Results to that point were extremely limited. Only about one in twenty recipients of any cochlear implant could carry on a conversation without speechreading. Consensus was growing, however, that the Bilger report had been correct and a multichannel implant was the most promising way forward. That view would be made official in a statement released following a 1988 conference convened by the National Institutes of Health.

After his implant was approved for adults in 1985, Clark began to work cautiously backward. That same year, he operated first on fifteen-year-old Peter Searle, then a ten-year-old named Scott Smith and, in 1986, five-year-old Bryn Davies. All three had lost their hearing as young children. The older the child, the longer he had been deaf. “The fifteen-year-old you could show some detection of sound but very limited benefit,” remembers Dowell, who worked with all three boys, using some twenty-five speech production and perception tests—far more than are used today. The tests couldn't be very hard, since Dowell needed measures the kids might actually be able to achieve. He asked the boys to distinguish between one- and two-syllable words. He gave them closed-set and open-set words and sentences to repeat. “The ten-year-old was maybe a bit better.” Bryn Davies, the youngest, had only been without hearing for two years when he got his implant at the age of five. Meningitis had ossified some of his ear canal and made it difficult to insert the electrodes very far. Nonetheless, his results were better than those of the two older boys. “He showed some promise,” says Dowell, “enough to make you think, Aha! Now we're getting to somewhere.” At that point, clinical trials, the round of research in which larger groups of children would get the implant and be closely watched and evaluated for several years, began.

 • • • 

In search of help, Caitlin Parton's parents had found their way to a New York City organization called the League for the Hard of Hearing, which offered information, speech therapy, and support groups. (Today, it is the Center for Hearing and Communication.) Because Caitlin had begun life with hearing, the Partons wanted to try an oral approach. Caitlin got hearing aids and began speech therapy, but the aids didn't help much. The League for the Hard of Hearing had been enlisted by Cochlear, the Australian company developing Clark's implant, and one of his collaborators, Dr. Noel Cohen of New York University Medical Center, to help find candidates for the clinical trials with children. As a preliminary step, the FDA had specified that the first clinical trial should include only children who had been born hearing and then become deaf, rather than those born deaf.

Caitlin was perfect.


“They said this device would give Catie a greater awareness of environmental sounds. That's all they promised us,” said Steve Parton in an interview some years later, “that she might be able to hear cars honking and dogs barking. As parents living in New York City, that sounded pretty good.”

It was no small thing, though, to have a child in an experimental trial. “There were no other families to talk with, no children to observe, no research studies to pore over and compare, no Internet, no listservs, no Twitter, no one to look at and speak to and share experiences with. There was no track record for the children,” said Melody James. “It was like skating out on thin ice.”

At the age of two and a half, Caitlin Parton became one of the youngest people in the world—and one of the first children that young in the United States—to receive a multichannel cochlear implant.

 • • • 

The winds of technological change were blowing. Facing into the gathering breeze, the Deaf community was determinedly flexing muscles it hadn't known existed, and the national media was paying attention. The spring of 1994 brought a new round of protests, this time at New York's Lexington School for the Deaf, a historic oral deaf school. When a hearing man named R. Max Gould was named chief executive officer of the Lexington Center, the institution that includes the school, Deaf leaders felt they had been left out of the search process. Wearing Deaf Pride T-shirts and carrying placards with messages like BOARD WHO CAN HEAR DON'T LISTEN, students and faculty organized days of protests at the school and at the offices of local politicians. After a week of pressure, Gould resigned and a deaf board president was installed to oversee the new search.

The Lexington protests were covered in a long feature in The New York Times Magazine by Andrew Solomon called “Defiantly Deaf.” (It was this story that led to his 2012 book, Far from the Tree: Parents, Children, and the Search for Identity.) The year before, The Atlantic Monthly also ran an in-depth and much-talked-about article called “Deafness as Culture,” exploring the central idea behind the movement: that deafness was not a disability. “The deaf community has begun to speak for itself,” wrote author Edward Dolnick. “To the surprise and bewilderment of outsiders, its message is utterly contrary to the wisdom of centuries: Deaf people, far from groaning under a heavy yoke, are not handicapped at all.” More than that, they were celebrating. At Lexington's commencement ceremony a few weeks after the successful protests, speaker Greg Hlibok, who had been one of the student leaders of the Deaf President Now movement, declared: “From the time God made earth until today, this is probably the best time to be Deaf.”

There was a paradox here. Along with the spread of computers and the advent of e-mail, which radically improved communication for deaf people, an important reason it was good to be deaf at that moment in the United States was the passage of the Americans with Disabilities Act (ADA) in 1990. The ADA defines a disability as an impairment that “substantially limits a major life activity” and outlaws discrimination on the basis of disability in employment, education, transportation, telecommunications, and public accommodation (restaurants, theaters, sports stadiums, hotels, and the like). The telecommunications provision directly concerns people “with hearing and speech disabilities” and requires telephone companies to provide TTY, a keyboard system connected to telephones, and relay services that make use of a third party to allow deaf people to communicate by phone with hearing people. As a result, to take just one example, when a deaf person checks into a hotel today, he can expect the television to have captions, the phone to include TTY service, and flashing lights for the fire alarm. The ADA requires ASL interpreters in schools and at public meetings. Most provisions of the law were welcomed because they truly made it easier for deaf people to operate independently. As one commentator put it, the new law “leveled the playing field.”

How could the Deaf reject “disability” as a concept that applied to them but accept the benefits of the Americans with Disabilities Act? It was an inconsistency that some found unsustainable. In a 1998 article for the nonpartisan bioethics research institute the Hastings Center, Bonnie Poitras Tucker, a disability law expert who is deaf herself, endorsed the provisions of the ADA as a way of allowing those with disabilities to take their rightful place in society. But with deaf people's newfound rights, argued Tucker, “come responsibilities.” Citing the extensive cost of deafness—an estimated “$2.5 billion per year in lost workforce productivity; $121.8 billion in the cost of education; and more than $2 billion annually for the cost of equal access, Social Security Disability Income, Medicare, and other entitlements of the disabled”—Tucker made the argument (extreme to some) that “when most deafness becomes correctable . . . an individual who chooses not to correct his or her deafness (or the deafness of his or her child) will lack the moral right to demand that others pay for costly accommodations.” (She added that cochlear implants were not likely to ever eliminate deafness altogether but claimed they might significantly reduce its “ramifications.”)

Even in the Deaf culture camp, where views like Tucker's were anathema, some wrestled with the problem of how to reconcile the need for accommodations with their proud view of their experience. “Part of the odyssey I've made,” a deaf adult named Cheryl Heppner told The Atlantic Monthly's Dolnick, “is in realizing that deafness is a disability, but it's a disability that is unique.” Others argued that since the law changed the environment, it provided access on deaf terms.

A cochlear implant, on the other hand, alters the person. And that, for many, was a problem. The Food and Drug Administration's 1990 decision to approve cochlear implants for children as young as two galvanized Deaf culture advocates. They saw the prostheses as just another in a long line of medical “fixes” for deafness. None of the previous ideas had worked, and it wasn't hard to find doctors and scientists who maintained that this wouldn't work either—at least not well. Beyond the complaint that the potential benefits of implants were dubious and unproven, Deaf culture advocates objected to the very premise that deaf people needed to be fixed at all. “I was upset,” Ted Supalla told me. “I never saw myself as deficient ever. The medical community was not able to see that we could possibly see ourselves as perfectly fine and normal just living our lives. To go so far as to put something technical in our brains, at the beginning, was a serious affront.” Waving his hand out the window at the buildings of Georgetown University Medical Center, where he is now employed, he gives a small laugh. “It's odd that I find myself working in a medical community. . . . It's a real indication that times are different now.”

The Deaf view was that late-deafened adults were old enough to understand their choice, had not grown up in Deaf culture, and already had spoken language. Young children who had been born deaf were different. The assumption was that cochlear implants would remove children from the Deaf world, thereby threatening the survival of that world. That led to complaints about “genocide” and the eradication of a minority group. Furthermore, implants would not necessarily deliver deaf children to the hearing world. Instead, the argument went, the children risked being adrift between the two, neither Deaf nor hearing. The Deaf community felt ignored by the medical and scientific supporters of cochlear implants; many believed deaf children should have the opportunity to make the choice for themselves once they were old enough; still others felt the implant should be outlawed entirely. “It felt like history repeating itself with a new vocabulary and new types of coercion,” Carol Padden told me. Tellingly, the ASL sign developed for COCHLEAR IMPLANT was two fingers stabbed into the neck, vampire-style.
