The technique that has the most empirical support is fMRI lie detection, which links deception to patterns of oxygenated blood flow in particular regions of the brain, but the relevant research has been plagued by a lack of standardized methods, small sample sizes, and inconsistent results.
One of the biggest problems has been that participants are generally aware that they are taking part in an experiment and have been told to lie or tell the truth.
Is a person trying to cover up shooting his roommate really the same as someone instructed to lie about a playing card that he picked out of a deck? Is a college student paid to take part in a study comparable to a habitually lying psychopath with a history of drug abuse facing the possibility of the death penalty?
And could that psychopath cheat the test if he wanted?
We've known for a long time that polygraphs can be gamed, and it turns out that countermeasures are also very effective when it comes to fMRI.
You can render the data unusable just by moving your head slightly, swishing your tongue, curling your toes, or holding your breath.
Changing what you are thinking about during questioning (trying to remember state capitals, say, or counting down from a hundred by threes) may also muddy the results. Much fMRI lie detection is based on the idea that because it is cognitively more difficult to lie than to tell the truth, lying activates more areas of the brain (thus the greater number of red and yellow patches in the image of Gary's brain when he was known to be lying).
But if you make your brain busier when giving an honest answer, it will look more like your brain when you are lying.
Conversely, if you carefully memorize all of your lies so they're simple to recall, you may look like a saint.
Despite these problems, it is conceivable that a day may come when a particular lie-detection approach is accurate 80, 90, or even 99 percent of the time. The question then will be whether that level of accuracy is high enough. If humans were able to take the margin of error into account and treat expert testimony as just another piece of evidence, it wouldn't matter. But there is reason to doubt that we are up to the task.
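To see what taking the margin of error into account would actually require, here is a minimal sketch in Python. The numbers are hypothetical, chosen purely for illustration (the 1 percent base rate of liars among tested subjects is an assumption, not a figure from any study discussed here); the point is only how Bayes' rule makes a test's verdict depend on how rare lying is among the people tested.

```python
# A back-of-the-envelope Bayes' rule calculation. The base rate below
# (1 percent of tested subjects actually lying) is hypothetical, chosen
# purely for illustration; it is not a figure from any study.

def prob_lying_given_flagged(accuracy: float, base_rate: float) -> float:
    """P(actually lying | test says 'lying'), via Bayes' rule."""
    true_positives = accuracy * base_rate               # liars correctly flagged
    false_positives = (1 - accuracy) * (1 - base_rate)  # truth-tellers wrongly flagged
    return true_positives / (true_positives + false_positives)

for accuracy in (0.80, 0.90, 0.99):
    p = prob_lying_given_flagged(accuracy, base_rate=0.01)
    print(f"{accuracy:.0%} accurate test: a 'lying' verdict is right {p:.1%} of the time")
```

Under these assumed numbers, even the 99-percent-accurate test is wrong about half the people it flags as liars; that is the kind of arithmetic a juror would need to hold onto when weighing a failed scan.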
Concerns that juries may overvalue scientific evidence have been around for quite a while.
In the 1990s, attorneys, judges, and academics became worried about “white-coat syndrome,” the notion that jurors blindly defer to experts without actually evaluating their testimony.
A decade later, the fears morphed into the “CSI effect”: the idea that certain television programs (like CSI, Law & Order, and Without a Trace), along with high-profile cases involving DNA tests, fiber analysis, and fingerprinting databases, had led members of the public to believe that forensic evidence was both widely available and almost infallible.
More recently, researchers have begun looking at how neuroscientific evidence can bias outcomes in criminal trials.
Brain scans may have a particularly strong influence on jurors.
In one experiment, significantly more participants found a defendant guilty after reading that he had failed an fMRI lie-detection test than when the trial summary included evidence of deceit based on polygraph or thermal-imaging results. One of the reasons fMRI may carry such weight is that we may conflate its use in a criminal context with scanning done to identify medical conditions, like tumors or strokes.
Diagnostic imaging, we assume, is diagnostic imaging.
Particularly troubling is the fact that even trained judges do not seem to be immune to the special allure of neuroscience.
In one study, 181 state trial judges read about Jonathan Donahue, who, during an attempted robbery of a Burger King, ended up beating a restaurant manager repeatedly in the head with a pistol, causing permanent brain damage, because “the fat son-of-a-bitch wouldn't stop crying.”
According to testimony from a psychiatrist, Donahue met the criteria for classification as a psychopath.
In addition, half of the judges were presented with testimony from “a neurobiologist and renowned expert on the causes of psychopathy” who offered results from a genetic test.
The test revealed that Donahue carried a particular gene linked with antisocial behavior, in particular with brain-development problems that result in the absence of “a normal violence-inhibition mechanism.”
The other half of the judges did not receive this additional expert testimony.
You might expect that the psychopathy diagnosis would do all of the work at sentencing; it doesn't seem as if knowing the underlying neurological cause would add much. You might also expect that the judges would view the scientific evidence as a reason to lock Donahue up for longer. Psychopaths present a serious future threat, after all.
But you'd be wrong. Bringing in expert testimony on the defendant's antisocial condition turned out to be a double-edged sword.
Judges rated the evidence of psychopathy as aggravating overall, as expected, but the additional neurobiological explanation of psychopathy resulted in significantly reduced sentences: about 7 percent shorter, on average. The neurobiologist's testimony changed how judges thought about the defendant's condition.
With the source of Donahue's behavior located in the brain, he suddenly seemed less in control of his actions and less blameworthy.
The question, of course, is whether the expert opinion here ought to be given such weight. Many neuroscientists are concerned, not only because applying general research on genetics and brain function to a particular individual is dubious given the state of the existing science, but also because it is unclear why explaining the particular biological mechanism of psychopathy should have any impact on sentencing.
After all, the judges had already been provided with testimony explaining the psychology of psychopathy and how resistant it is to treatment. The key point is that the person has an impairment for which he is not responsible.
It should not make any difference whether that impairment is the result of being emotionally abused as a child or possessing a certain gene or experiencing a series of concussions.
During three days of pretrial testimony concerning the former Army Ranger Gary Smith's fMRI evidence, Judge Johnson was presented with some of the research we've just discussed.
Now he had to decide whether to allow Professor Haist's findings to come before the jury.
It was never going to be an easy decision: here was a man accused of murdering a friend and roommate; his life was in the balance.
All Smith asked was that jurors be able to consider this potentially exculpatory evidence. If you were accused of murder and you had evidence that you were innocent, even if it was based on unsettled science, shouldn't you at least be able to show it to the people deciding your fate?
No, said Judge Johnson, the brain scans had to be kept out of the courtroom.
Gary Smith was ultimately convicted of involuntary manslaughter and sentenced to twenty-eight years in prison, with the case now working its way through the appellate courts.
Judge Johnson took the path of caution. But that does not mean that all judges will resist the allure of neuroscience-based lie detection and wait for the field to develop. There has already been a major shot across the bow, half a world away.
On June 12, 2008, in Mumbai, India, Judge Shalini Phansalkar-Joshi handed down the first-ever decision convicting someone of murder based in part on a brain scan.
It started as a love story.
Aditi Sharma and Udit Bharati met when they were just teenagers.
They dated as students at Mahantsingh Engineering College, and after getting engaged they headed off together to business school at the Indian Institute of Modern Management in Pune.
But Aditi soon broke off the engagement, having fallen for a fellow MBA classmate, Pravin Khandelwal.
Aditi and Pravin dropped out of school and moved to another state (disappointing, no doubt, for Aditi's parents, but nothing out of the ordinary).
Six months later, though, Aditi was back in Pune, allegedly arranging to meet Udit at a McDonald's.
Udit would not live to see the next night.
According to prosecutors, Aditi poisoned Udit with arsenic-laced candy that she offered him as the two sat talking.
The turning point in the case came when Aditi consented to being hooked up to an EEG after an initial polygraph test suggested her involvement in Udit's death.
As she sat with thirty-two electrodes attached to her head, technicians read aloud various innocuous statements, along with first-person statements about key facts in the case: “I bought arsenic.” “I met Udit at McDonald's.”
According to investigators administering the Brain Electrical Oscillations Signature (BEOS) test, the electrical signals coming from the surface of her scalp were damning.
When she heard the details of the crime, specific regions of the brain involved in reliving past experiences became active.
Aditi had not just heard about the murder; she had “experiential knowledge” of it: she was the murderer.
And what is particularly astonishing is that for the investigators to reach this conclusion, Aditi didn't have to say a word.
The BEOS results provided key evidence for Judge Phansalkar-Joshi; she devoted no fewer than nine pages of her decision to explaining and defending them.
Because Aditi was later released on bail by the Bombay High Court pending her appeal (which may take years to work out), it is easy to write this case off as an anomaly likely to be remedied by the existing judicial process. But that would be a serious mistake, because lawyers around the world are currently working to bring in similar evidence in an array of different contexts.
In the United States, courts have resisted allowing neuroscience-based lie-detection technology to come before a jury, but judges have permitted brain images to be used to challenge a witness's testimony and to mitigate a defendant's responsibility for a crime.
Over the last decade, criminal defense attorneys have introduced neurological evidence in hundreds of cases, and the trend is increasing.
So, for example, when Grady Nelson killed his wife and stabbed and raped her eleven-year-old daughter, his lawyer convinced a Miami judge to allow a neuroscientist to testify about abnormalities in Nelson's brain. That evidence appears to have saved his life.
Two jurors who came out against his execution reported that it was the neuroscience that had made them vote as they did. “It turned my decision all the way around,” one of them explained. “The technology really swayed me…. After seeing the brain scans, I was convinced this guy had some sort of brain problem.”
If either of those jurors had voted differently, Nelson would have been sentenced to death.
Even when lie-detection technology is barred from the courtroom, it can still have a powerful influence on our justice system.
Although the polygraph has been kept out of criminal trials for years, it has managed to play a critical part in many convictions.
Polygraphs are regularly used in criminal investigations at both the federal and the local level.
As we saw in Juan Rivera's wrongful conviction case, detectives can even lie to suspects about the results in hopes that it will prompt a confession, and because the polygraph has the patina of “hard science,” it can have that effect even on those who are innocent.
Kevin Fox, for instance, was told by police that he would be cleared as a suspect in the rape and murder of his young daughter, Riley, if he passed a polygraph.
The test administrator, however, lied to Fox, allegedly telling him that the polygraph was absolutely reliable and admissible in Illinois state court and that it showed that he was the perpetrator.
With that seemingly devastating strike against him, Fox, like Rivera, didn't see any other way forward but to confess to a horrific crime he did not commit.
Polygraphs are a routine part of probation and parole processes, including the regulation of released sex offenders.
In New Jersey, for example, almost all of the state's 5,600 supervised sex offenders must be given at least one polygraph each year.
Hooked up to the machine, they may be asked whether they've had any unsupervised contact with minors, abused drugs, or felt attracted to a sixteen-year-old co-worker at the fast-food place where they work.
Failing the test can mean losing a job, being required to wear an electronic ankle bracelet, or ending up back in prison.
In certain states, they may even be given a “sexual history disclosure examination” that covers their entire life, which could conceivably prompt additional prosecution if other crimes are revealed.
The science-fiction world in which the government tries to read your thoughts is already upon us, and it is no great leap to assume that EEGs and fMRIs will be the next generation of tools to be exploited.
These technologies are starting to have an impact on our legal system, and those who stand to gain from using them are not going to wait for approval from the scientific establishment any more than they have with the polygraph.
We need to prepare judges to make the hard calls on admitting new lie-detection evidence.
The existing instructions just aren't up to the task.
The main problem is not the criteria the Supreme Court has established to guide courts in deciding whether expert testimony should be admissible: Is the research falsifiable and testable? Is it peer reviewed? Has it been accepted in the scientific community? What is the likely or known error rate? These are the proper questions to ask. The problem is that most judges aren't able to answer them.
In a recent survey, only 5 percent of state trial court judges were able to explain the meaning of “falsifiability,” and an even smaller percentage understood “error rate.”
Moreover, in an experiment involving Florida circuit court judges, researchers found that the judges' decisions to admit or exclude expert testimony were not based on the quality of the science at all. With numerous legally relevant scientific breakthroughs on the horizon and investigative methods growing ever more sophisticated, that is reason for serious concern.
Federal and state judiciaries should commit to the rigorous training of judges in assessing expert testimony. If a forklift operator has to reach a basic level of competency with the standard equipment for his job, why shouldn't a judge? A lack of proficiency can bring devastating consequences in both cases. And making scientific literacy mandatory doesn't demean judges; it's a testament to the importance of what they do. As we'll see in the next chapter, judicial education provides other benefits as well, and there's existing precedent.
Indeed, in the last few years, leading researchers have drafted guides to help judges handle neuroscientific evidence, and a handful of seminars have been held around the country. But we need to greatly expand and bolster this workâand we might consider starting earlier, by offering more classes at law schools focused on science in the legal sphere.
The first law and neuroscience coursebook was just published, and over twenty schools now have classes, like the one I teach, focused on law and the mind sciences.