Excellent article on statistical algorithms outperforming experts in
making predictions.

http://www.ft.com/cms/s/0/44f39c1c-5824-11dc-8c65-0000779fd2ac.html
----------------------------------------snip
Six years ago, Ted Ruger, a law professor at the University of
Pennsylvania, attended a seminar at which two political scientists,
Andrew Martin and Kevin Quinn, made a bold claim. They said that by
using just a few variables concerning the politics of the case, they
could predict how the US Supreme Court justices would vote.

Ruger wasn't buying it... After the seminar he went up to them with a
suggestion: why didn't they run the test forward?...

The test would implicate some of the most basic questions of what law
is. In 1881, Justice Oliver Wendell Holmes created the idea of legal
realism by announcing: "The life of the law has not been logic; it
has been experience." For him, the law was nothing more than "a
prediction of what judges in fact will do". He rejected the view of
Harvard's dean at the time, Christopher Columbus Langdell, who said
that "law is a science, and... all the available materials of that
science are contained in printed books".
...

The experts lost. For every argued case during the 2002 term, the
model predicted 75 per cent of the court's affirm/reverse results
correctly, while the legal experts collectively got only 59.1 per cent
right. The computer was particularly effective at predicting the
crucial swing votes of Justices Sandra Day O'Connor and Anthony
Kennedy. The model predicted O'Connor's vote correctly 70 per cent of
the time
while the experts' success rate was only 61 per cent.

How can it be that an incredibly stripped-down statistical model
outpredicted legal experts with access to detailed information about
the cases?.... The short answer is that Ruger's test is representative
of a much wider phenomenon. Since the 1950s, social scientists have
been comparing the predictive accuracies of number crunchers and
traditional experts - and finding that statistical models consistently
outpredict experts. But now that revelation has become a revolution in
which companies, investors and policymakers use analysis of huge
datasets to discover empirical correlations between seemingly
unrelated things. Want to hedge a large purchase of euros? Turns out
you should sell a carefully balanced portfolio of 26 other stocks and
commodities that might include some shares in Wal-Mart...

Instead of simply throwing away the know-how of experts, wouldn't it
be better to combine super crunching and experiential knowledge? Can't
the two types of knowledge peacefully coexist? There is some evidence
to support this possibility. Indeed, traditional experts have been
shown to make better decisions when they are provided with the
results of statistical prediction.

But evidence is mounting in favour of a different and much more
dehumanising mechanism for combining human and super-crunching
expertise. Several studies have shown that the most accurate way to
exploit traditional expertise is merely to add the expert evaluation
as an additional factor in the statistical algorithm...
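The mechanism described above, feeding the expert's judgment into the model as just one more input variable, can be sketched in a few lines of Python. Everything here is hypothetical: the synthetic data, the feature names and the coefficients exist only to illustrate the idea of treating an expert's rating as an additional regression feature rather than as the final word.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (stdlib only)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Hypothetical data: one coded case variable plus the expert's own
# probability estimate, entered as just another input column.
random.seed(0)
X, y = [], []
for _ in range(300):
    case_var = random.uniform(-1.0, 1.0)   # stand-in for a coded fact
    expert_est = random.uniform(0.0, 1.0)  # expert's subjective estimate
    p_true = sigmoid(2.0 * case_var + 3.0 * (expert_est - 0.5))
    X.append([case_var, expert_est])
    y.append(1 if random.random() < p_true else 0)

w, b = fit_logistic(X, y)
accuracy = sum(
    (predict(w, b, xi) > 0.5) == (yi == 1) for xi, yi in zip(X, y)
) / len(y)
```

Because the expert's estimate genuinely carries signal in this synthetic set-up, the fitted model assigns it a positive weight: the expert serves the machine as one predictor among several, which is exactly the inversion the article describes.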

Instead of having the statistics as a servant to expert choice, the
expert becomes a servant of the statistical machine... It's best to
have the man and machine in dialogue with each other, but, when the
two disagree, it's usually better to give the ultimate decision to the
statistical prediction.

The decline of expert discretion is particularly pronounced in the
case of parole. In the past 25 years, 18 states have replaced their
parole systems with sentencing guidelines. And those states that
retain parole have shifted their systems to rely increasingly on
super-crunching risk assessments of recidivism. Just as your credit
score powerfully predicts the likelihood that you will repay a loan,
parole boards now have externally validated predictions framed as
numerical scores derived from a formula. Still, even reduced
discretion can give
rise to serious risk when humans deviate from the statistically
prescribed course of action.

Consider the case of Paul Herman Clouston.... He had been serving time
in a Virginia penitentiary until April 15 2005, when he was released
on mandatory parole six months before the end of his nominal sentence.
As soon as Clouston hit the streets, he fled....

Virginia made Clouston "most wanted" for the same reason - and because
it was embarrassed that Clouston had been released... The Rapid Risk
Assessment for Sexual Offender Recidivism (RRASOR, and pronounced
"razor") is a points system based on a regression analysis of male
offenders in Canada. A score of four or more on the RRASOR translates
into a prediction that the inmate, if released, would in the next 10
years have a 55 per cent chance of committing another sex offence....
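A points instrument of this kind can be sketched as a checklist summed into an integer score, then looked up against a validated rate. To be clear, the item names and point values below are simplified placeholders, not the actual RRASOR items; the only number taken from the article is the 55 per cent predicted 10-year reoffence rate for scores of four or more.

```python
def total_score(item_points):
    """Sum integer points over a checklist of static risk items."""
    return sum(item_points.values())

def predicted_10yr_reoffence_rate(score):
    """Mapping stated in the article: a score of 4 or more translates
    into a predicted 55 per cent chance of another sex offence within
    10 years. The article gives no rates for lower scores, so those
    return None rather than an invented figure."""
    return 0.55 if score >= 4 else None

# Hypothetical inmate profile -- illustrative item names only.
inmate = {
    "prior_offence_points": 2,
    "young_at_release": 1,
    "other_item_a": 1,
    "other_item_b": 0,
}
score = total_score(inmate)                  # 4
rate = predicted_10yr_reoffence_rate(score)  # 0.55
```

The point of the design is that every step is mechanical: no box on the checklist asks for the assessor's overall impression, which is precisely what removes the human discretion the article goes on to discuss.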

Either way, the Clouston story seems to be one where human discretion
led to the error of his release.
