Super Crunchers
Ian Ayres

When I asked Ted Ruger whether he thought he could outpredict the computer algorithm if he had access to the computer's prediction, he caught himself sliding once again into the trap of overconfidence. “I should be able to beat it,” he started, but then corrected himself. “But maybe not. I wouldn't really know what my thought process would be. I'd look at the model and then I would sort of say, well, what could I do better here? I would probably muck it up in a lot of cases.”

Evidence is mounting in favor of a different and much more demeaning, dehumanizing mechanism for combining expert and Super Crunching expertise. In several studies, the most accurate way to exploit traditional expertise is to merely add the expert evaluation as an additional factor in the statistical algorithm. Ted's Supreme Court study, for example, suggests that a computer that had access to human predictions would rely on the experts to determine the votes of the more liberal justices (Breyer, Ginsburg, Souter, and Stevens)—because the unaided experts outperformed the Super Crunching algorithm in predicting the votes of these justices.

Instead of having the statistics as a servant to expert choice, the expert becomes a servant of the statistical machine. Mark E. Nissen, a professor at the Naval Postgraduate School in Monterey, California, who has tested computer-versus-human procurement, sees a fundamental shift toward systems where the traditional expert is stripped of his or her power to make the final decision. “The newest space, and the one that's most exciting, is where machines are actually in charge,” he said, “but they have enough awareness to seek out people to help them when they get stuck.” It's best to have the man and machine in dialogue with each other, but, when the two disagree, it's usually better to give the ultimate decision to the statistical prediction.

The decline of expert discretion is particularly pronounced in the case of parole. In the last twenty-five years, eighteen states have replaced their parole systems with sentencing guidelines. And those states that retain parole have shifted their systems to rely increasingly on Super Crunching risk assessments of recidivism. Just as your credit score powerfully predicts the likelihood that you will repay a loan, parole boards now have externally validated predictions framed as numerical scores in formulas like the VRAG (Violence Risk Appraisal Guide), which estimates the probability that a released inmate will commit a violent crime. Still, even reduced discretion can give rise to serious risk when humans deviate from the statistically prescribed course of action.

Consider the worrisome case of Paul Herman Clouston. For over fifty years, Clouston has been in and out of prison in several states for everything from auto theft and burglary to escape. In 1972, he was convicted of murdering a police officer in California. In 1994, he was convicted in Virginia of aggravated sexual battery, abduction, and sodomy, and of assaulting juveniles in James City County, Virginia. He had been serving time in a Virginia penitentiary until April 15, 2005, when he was released on mandatory parole six months before the end of his nominal sentence.

As soon as Clouston hit the streets, he fled. He failed to report for parole and failed to register as a violent sex offender. He is now one of the most wanted men in Virginia. He is on the U.S. Marshals' Most Wanted list and was recently featured on America's Most Wanted. But why did this seventy-one-year-old, who had served his time, flee and why did he make all of these most wanted lists?

The answer to both questions is the SVPA. In April of 2003, Virginia became the sixteenth state in our nation to enact a “Sexually Violent Predator Act” (SVPA). Under this extraordinary statute, an offender, after serving his full sentence, can be found to be a “sexually violent predator” and subject to civil commitment in a state mental hospital until a judge is satisfied he no longer presents an undue risk to public safety.

Clouston probably fled because he was worried that he would be adjudged to be a sexual predator (defined in the statute as someone “who suffers from a mental abnormality or personality disorder which makes the person likely to engage in the predatory acts of sexual violence”). And the state made Clouston “most wanted” for the very same reason.

The state was also embarrassed that Clouston had ever been released in the first place. You see, Virginia's version of the SVPA contained a Super Crunching innovation. The statute itself included a “tripwire” that automatically sets the commitment process in motion if a Super Crunching algorithm predicts that the inmate has a high risk of sexual offense recidivism. Under the statute, commissioners of the Virginia Department of Corrections were directed to review for possible commitment all prisoners about to be released who, and I'm quoting the statute here, “receive a score of four or more on the Rapid Risk Assessment for Sexual Offender Recidivism.” The Rapid Risk Assessment for Sexual Offender Recidivism (RRASOR) is a point system based on a regression analysis of male offenders in Canada. A score of four or more on the RRASOR translates into a prediction that the inmate, if released, would in the next ten years have a 55 percent chance of committing another sex offense.

The Supreme Court in a 5–4 decision has upheld the constitutionality of prior SVPAs—finding that indefinite civil commitment of former inmates does not violate the Constitution. What's amazing about the Virginia statute is that it uses Super Crunching to trigger the commitment process. John Monahan, a leading expert in the use of risk-assessment instruments, notes, “Virginia's sexually violent predator statute is the first law ever to specify, in black letter, the use of a named actuarial prediction instrument and an exact cut-off score on that instrument.”

Clouston probably never should have been released because he had a RRASOR score of four. The state has refused to comment on whether they failed to assess Clouston's RRASOR score as directed by the statute or whether the committee reviewing his case chose to release him notwithstanding the statistical prediction of recidivism. Either way, the Clouston story seems to be one where human discretion led to the error of his release.

It was a mistake, that is, if we trust the RRASOR prediction. Before rushing to this conclusion, however, it's worthwhile to look at what exactly qualified Clouston as a four on the RRASOR scale. The RRASOR system—pronounced “razor,” as in Occam's razor—is based on just the four factors listed below:

1. Prior sexual offenses
   None: 0
   1 conviction or 1–2 charges: 1
   2–3 convictions or 3–5 charges: 2
   4+ convictions or 6+ charges: 3

2. Age of release (current age)
   More than 25: 0
   Less than 25: 1

3. Victim gender
   Only females: 0
   Any males: 1

4. Relationship to victim
   Only related: 0
   Any nonrelated: 1

SOURCE: John Monahan and Laurens Walker, Social Science in Law: Cases and Materials (2006).

Clouston would receive one point for victimizing a male, one for victimizing a nonrelative, and two more because he had three previous sex-offense charges. It's hard to feel any pity for Clouston, but this seventy-one-year-old could be funneled toward lifetime commitment based in part upon crimes for which he'd never been convicted. What's more, this statutory trigger expressly discriminates based on the sex of his victims. These factors are not chosen to assess the relative blameworthiness of different inmates. They are solely about predicting the likelihood of recidivism. If it turned out that wholly innocent conduct (putting barbecue sauce on ice cream) had a statistically valid, positive correlation with recidivism, the RRASOR system at least in theory would condition points on such behavior.
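The four-factor point system above is simple enough to sketch in a few lines of code. This is an illustrative sketch only: the function name and argument names are mine, but the thresholds and point values follow the table.

```python
def rrasor_score(prior_charges, prior_convictions, age_at_release,
                 any_male_victim, any_nonrelated_victim):
    """Return the RRASOR point total for one offender (range 0-6)."""
    # Factor 1: prior sexual offenses, scored off charges or convictions
    if prior_convictions >= 4 or prior_charges >= 6:
        score = 3
    elif prior_convictions >= 2 or prior_charges >= 3:
        score = 2
    elif prior_convictions >= 1 or prior_charges >= 1:
        score = 1
    else:
        score = 0
    # Factor 2: age at release
    if age_at_release < 25:
        score += 1
    # Factor 3: victim gender
    if any_male_victim:
        score += 1
    # Factor 4: relationship to victim
    if any_nonrelated_victim:
        score += 1
    return score

# Clouston: three prior sex-offense charges (2 points), male victims (1),
# nonrelated victims (1), released at age seventy-one (0) -- total of 4,
# exactly the statutory tripwire.
print(rrasor_score(prior_charges=3, prior_convictions=0,
                   age_at_release=71, any_male_victim=True,
                   any_nonrelated_victim=True))  # -> 4
```

Note what the sketch makes vivid: the entire prediction rests on four coarse inputs, and a single point separates an inmate the statute flags for possible lifetime commitment from one it does not.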

This Super Crunching cutoff of course doesn't mandate civil commitment; it just mandates that humans consider whether the inmate should be committed as a “sexually violent predator.” In exercising this discretion, state officials not infrequently wave off the Super Crunching prediction. Since the statute was passed, the attorney general's office has sought commitment against only about 70 percent of the inmates who scored a four or more on the risk assessment, and courts have granted the state's petition to commit these inmates only about 70 percent of the time.

The Virginia statute thus channels discretion, but it does not obliterate it. To cede complete decision-making power to lock up a human to a statistical algorithm is in many ways unthinkable. Complete deference to statistical prediction in this or other contexts would almost certainly lead to the odd decision that at times we “know” is going to be wrong. Indeed, Paul Meehl long ago worried about the “case of the broken leg.” Imagine that a Super Cruncher is trying to predict whether individuals will go to the movies on a certain night. The Super Crunching formula might predict on the basis of twenty-five statistically validated factors that Professor Brown has an 84 percent probability of going to a movie next Friday night. Now suppose that we also learn that Brown has a compound leg fracture from an accident a few days ago and is immobilized in a hip cast.

Meehl understood that it would be absurd to rely on the actuarial prediction in the face of this new piece of information. By solely relying on the regression or even relegating the expert's opinion to merely being an additional input to this regression, we are likely to make the wrong decision. A statistical procedure cannot estimate the causal impact of rare events (like broken legs) because there simply aren't enough data concerning them to make a credible estimate. The rarity of the event doesn't mean that it will not have a big impact when the event does in fact occur. It just means that statistical formulas will not be able to capture the impact. If we really care about making an accurate prediction in such circumstances, we need to have some kind of discretionary escape hatch—some way for a human to override the prediction of the formula.

The problem is that these discretionary escape hatches have costs too. “People see broken legs everywhere,” Snijders says, “even when they are not there.” The Mercury astronauts insisted on a literal escape hatch. They balked at the idea of being bolted inside a capsule that could only be opened from the outside. They demanded discretion. However, it was discretion that gave Liberty Bell 7 astronaut Gus Grissom the opportunity to panic upon splashdown. In Tom Wolfe's memorable account, Grissom “screwed the pooch” when he prematurely blew the seventy explosive bolts securing the hatch before the Navy SEALs were able to secure floats. The space capsule sank and Gus nearly drowned.

System builders must carefully consider the costs as well as the benefits of delegating discretion. In context after context, decision makers who wave off the statistical predictions tend to make poorer decisions. The expert override doesn't do worse when a true broken-leg event occurs. Still, experts are overconfident in their ability to beat the system. We tend to think that the restraints are useful for the other guy but not for us. So we don't limit our overrides to the clear cases where the formula is wrong; we override wherever we think we know better. And that's when we get in trouble. Parole and civil commitment boards that make exceptions to the statistical algorithm and release inmates who are predicted to have a high probability of violence tend time and again to find that the high-probability parolees have higher recidivism rates than those predicted to have a low probability. Indeed, in Virginia only one man out of the dozens civilly committed under the SVPA has ever been subsequently released by a judge who found him—notwithstanding his RRASOR score—no longer to be a risk to society. Once freed, this man abducted and sodomized a child and now is serving a new prison sentence.

There is an important cognitive asymmetry here. Ceding complete control to a statistical formula inevitably will give rise to the intolerable result of making some decisions that reason tells us must be wrong. The “broken leg” hypothetical is cute, but unconditional adherence to statistical formulas will lead to powerful examples of tragedy—organs being transplanted into people we know can't use them. These rare but salient anecdotes will loom large in our consciousness. It's harder to keep in mind the evidence that discretionary systems, where experts are allowed to override the statistical algorithms, tend to do worse.

What does all this mean for human flourishing? If we care solely about getting the best decisions overall, there are many contexts where we need to relegate experts to mere supporting roles in the decision-making process. We, like the Mercury astronauts, probably can't tolerate a system that forgoes any possibility of human override. At a minimum, however, we should keep track of how experts fare when they wave off the suggestions of the formulas. The broken leg hypothetical teaches us that there will, of course, be unusual circumstances where we'll have good reason for ignoring statistical prediction and going with what our gut and our reason tell us to do. Yet we also need to keep an eye on how often we get it wrong and try to limit our own discretion to places where we do better than machines. “It is critical that the level, type, and circumstances of over-ride usage be monitored on an ongoing basis,” University of Massachusetts criminologists James Byrne and April Pattavina wrote recently. “A simple rule of thumb for this type of review is to apply a 10 percent rule: if more than 10 percent of the agency's risk scoring decisions are being changed, then the agency has a problem in this area that needs to be resolved.” They want to make sure that override is limited to fairly rare circumstances. I'd propose instead that if more than half of the overrides are getting it wrong, then humans, like the Mercury astronauts, are overriding too much.
