http://www.wired.com/magazine/2009/12/fail_accept_defeat/all/1

Accept Defeat: The Neuroscience of Screwing Up

    * By Jonah Lehrer
    * December 21, 2009, 10:00 am
    * Wired, Jan 2010

It all started with the sound of static. In May 1964, two astronomers
at Bell Labs, Arno Penzias and Robert Wilson, were using a radio
telescope in suburban New Jersey to search the far reaches of space.
Their aim was to make a detailed survey of radiation in the Milky Way,
which would allow them to map those vast tracts of the universe devoid
of bright stars. This meant that Penzias and Wilson needed a receiver
that was exquisitely sensitive, able to eavesdrop on all the
emptiness. And so they had retrofitted an old radio telescope,
installing amplifiers and a calibration system to make the signals
coming from space just a little bit louder.

But they made the scope too sensitive. Whenever Penzias and Wilson
aimed their dish at the sky, they picked up a persistent background
noise, a static that interfered with all of their observations. It was
an incredibly annoying technical problem, like listening to a radio
station that keeps cutting out.

At first, they assumed the noise was man-made, an emanation from
nearby New York City. But when they pointed their telescope straight
at Manhattan, the static didn’t increase. Another possibility was that
the sound was due to fallout from recent nuclear bomb tests in the
upper atmosphere. But that didn’t make sense either, since the level
of interference remained constant, even as the fallout dissipated. And
then there were the pigeons: A pair of birds were roosting in the
narrow part of the receiver, leaving a trail of what the astronomers
later described as “white dielectric material.” The scientists evicted
pigeons and scrubbed away their mess, but the static remained, as loud
as ever.

For the next year, Penzias and Wilson tried to ignore the noise,
concentrating on observations that didn’t require cosmic silence or
perfect precision. They put aluminum tape over the metal joints, kept
the receiver as clean as possible, and hoped that a shift in the
weather might clear up the interference. They waited for the seasons
to change, and then change again, but the noise always remained,
making it impossible to find the faint radio echoes they were looking
for. Their telescope was a failure.

Kevin Dunbar is a researcher who studies how scientists study things —
how they fail and succeed. In the early 1990s, he began an
unprecedented research project: observing four biochemistry labs at
Stanford University. Philosophers have long theorized about how
science happens, but Dunbar wanted to get beyond theory. He wasn’t
satisfied with abstract models of the scientific method — that
seven-step process we teach schoolkids before the science fair — or
the dogmatic faith scientists place in logic and objectivity. Dunbar
knew that scientists often don’t think the way the textbooks say they
are supposed to. He suspected that all those philosophers of science —
from Aristotle to Karl Popper — had missed something important about
what goes on in the lab. (As Richard Feynman famously quipped,
“Philosophy of science is about as useful to scientists as ornithology
is to birds.”) So Dunbar decided to launch an “in vivo” investigation,
attempting to learn from the messiness of real experiments.

He ended up spending the next year staring at postdocs and test tubes:
The researchers were his flock, and he was the ornithologist. Dunbar
brought tape recorders into meeting rooms and loitered in the hallway;
he read grant proposals and the rough drafts of papers; he peeked at
notebooks, attended lab meetings, and videotaped interview after
interview. He spent four years analyzing the data. “I’m not sure I
appreciated what I was getting myself into,” Dunbar says. “I asked for
complete access, and I got it. But there was just so much to keep
track of.”

Dunbar came away from his in vivo studies with an unsettling insight:
Science is a deeply frustrating pursuit. Although the researchers were
mostly using established techniques, more than 50 percent of their
data was unexpected. (In some labs, the figure exceeded 75 percent.)
“The scientists had these elaborate theories about what was supposed
to happen,” Dunbar says. “But the results kept contradicting their
theories. It wasn’t uncommon for someone to spend a month on a project
and then just discard all their data because the data didn’t make
sense.” Perhaps they hoped to see a specific protein but it wasn’t
there. Or maybe their DNA sample showed the presence of an aberrant
gene. The details always changed, but the story remained the same: The
scientists were looking for X, but they found Y.

Dunbar was fascinated by these statistics. The scientific process,
after all, is supposed to be an orderly pursuit of the truth, full of
elegant hypotheses and control variables. (Twentieth-century science
philosopher Thomas Kuhn, for instance, defined normal science as the
kind of research in which “everything but the most esoteric detail of
the result is known in advance.”) However, when experiments were
observed up close — and Dunbar interviewed the scientists about even
the most trifling details — this idealized version of the lab fell
apart, replaced by an endless supply of disappointing surprises. There
were models that didn’t work and data that couldn’t be replicated and
simple studies riddled with anomalies. “These weren’t sloppy people,”
Dunbar says. “They were working in some of the finest labs in the
world. But experiments rarely tell us what we think they’re going to
tell us. That’s the dirty secret of science.”

How did the researchers cope with all this unexpected data? How did
they deal with so much failure? Dunbar realized that the vast majority
of people in the lab followed the same basic strategy. First, they
would blame the method. The surprising finding was classified as a
mere mistake; perhaps a machine malfunctioned or an enzyme had gone
stale. “The scientists were trying to explain away what they didn’t
understand,” Dunbar says. “It’s as if they didn’t want to believe it.”

The experiment would then be carefully repeated. Sometimes, the weird
blip would disappear, in which case the problem was solved. But the
weirdness usually remained, an anomaly that wouldn’t go away.

This is when things get interesting. According to Dunbar, even after
scientists had generated their “error” multiple times — it was a
consistent inconsistency — they might fail to follow it up. “Given the
amount of unexpected data in science, it’s just not feasible to pursue
everything,” Dunbar says. “People have to pick and choose what’s
interesting and what’s not, but they often choose badly.” And so the
result was tossed aside, filed in a quickly forgotten notebook. The
scientists had discovered a new fact, but they called it a failure.

The reason we’re so resistant to anomalous information — the real
reason researchers automatically assume that every unexpected result
is a stupid mistake — is rooted in the way the human brain works. Over
the past few decades, psychologists have dismantled the myth of
objectivity. The fact is, we carefully edit our reality, searching for
evidence that confirms what we already believe. Although we pretend
we’re empiricists — our views dictated by nothing but the facts —
we’re actually blinkered, especially when it comes to information that
contradicts our theories. The problem with science, then, isn’t that
most experiments fail — it’s that most failures are ignored.

As he tried to further understand how people deal with dissonant data,
Dunbar conducted some experiments of his own. In one 2003 study, he
had undergraduates at Dartmouth College watch a couple of short videos
of two different-size balls falling. The first clip showed the two
balls falling at the same rate. The second clip showed the larger ball
falling at a faster rate. The footage was a reconstruction of the
famous (and probably apocryphal) experiment performed by Galileo, in
which he dropped cannonballs of different sizes from the Tower of
Pisa. Galileo’s metal balls all landed at the exact same time — a
refutation of Aristotle, who claimed that heavier objects fell faster.
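
(For readers who want the physics spelled out, the standard textbook
derivation takes one line. Assuming air resistance is negligible, a
dropped ball of mass m obeys Newton’s second law:

    \[ ma = mg \;\Rightarrow\; a = g, \qquad
       h = \tfrac{1}{2} g t^2 \;\Rightarrow\; t = \sqrt{2h/g}. \]

The mass m cancels, so the time t to fall from a height h depends only
on h and the gravitational acceleration g. Heavy ball and light ball
land together, just as Galileo claimed.)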

While the students were watching the footage, Dunbar asked them to
select the more accurate representation of gravity. Not surprisingly,
undergraduates without a physics background disagreed with Galileo.
(Intuitively, we’re all Aristotelians.) They found the two balls
falling at the same rate to be deeply unrealistic, despite the fact
that it’s how objects actually behave. Furthermore, when Dunbar
monitored the subjects in an fMRI machine, he found that showing
non-physics majors the correct video triggered a particular pattern of
brain activation: There was a squirt of blood to the anterior
cingulate cortex, a collar of tissue located in the center of the
brain. The ACC is typically associated with the perception of errors
and contradictions — neuroscientists often refer to it as part of the
“Oh shit!” circuit — so it makes sense that it would be turned on when
we watch a video of something that seems wrong.

So far, so obvious: Most undergrads are scientifically illiterate. But
Dunbar also conducted the experiment with physics majors. As expected,
their education enabled them to see the error, and for them it was the
inaccurate video that triggered the ACC.

But there’s another region of the brain that can be activated as we go
about editing reality. It’s called the dorsolateral prefrontal cortex,
or DLPFC. It’s located just behind the forehead and is one of the last
brain areas to develop in young adults. It plays a crucial role in
suppressing so-called unwanted representations, getting rid of those
thoughts that don’t square with our preconceptions. For scientists,
it’s a problem.

When physics students saw the Aristotelian video with the aberrant
balls, their DLPFCs kicked into gear and they quickly deleted the
image from their consciousness. In most contexts, this act of editing
is an essential cognitive skill. (When the DLPFC is damaged, people
often struggle to pay attention, since they can’t filter out
irrelevant stimuli.) However, when it comes to noticing anomalies, an
efficient prefrontal cortex can actually be a serious liability. The
DLPFC is constantly censoring the world, erasing facts from our
experience. If the ACC is the “Oh shit!” circuit, the DLPFC is the
Delete key. When the ACC and DLPFC “turn on together, people aren’t
just noticing that something doesn’t look right,” Dunbar says.
“They’re also inhibiting that information.”

The lesson is that not all data is created equal in our mind’s eye:
When it comes to interpreting our experiments, we see what we want to
see and disregard the rest. The physics students, for instance, didn’t
watch the video and wonder whether Galileo might be wrong. Instead,
they put their trust in theory, tuning out whatever it couldn’t
explain. Belief, in other words, is a kind of blindness.

How to Learn From Failure

Too often, we assume that a failed experiment is a wasted effort. But
not all anomalies are useless. Here’s how to make the most of them.
—J.L.

1 Check Your Assumptions

Ask yourself why this result feels like a failure. What theory does it
contradict? Maybe the hypothesis failed, not the experiment.

2 Seek Out the Ignorant

Talk to people who are unfamiliar with your experiment. Explaining
your work in simple terms may help you see it in a new light.

3 Encourage Diversity

If everyone working on a problem speaks the same language, then
everyone has the same set of assumptions.

4 Beware of Failure-Blindness

It’s normal to filter out information that contradicts our
preconceptions. The only way to avoid that bias is to be aware of it.

But this research raises an obvious question: If humans — scientists
included — are apt to cling to their beliefs, why is science so
successful? How do our theories ever change? How do we learn to
reinterpret a failure so we can see the answer?

This was the challenge facing Penzias and Wilson as they tinkered with
their radio telescope. Their background noise was still inexplicable,
but it was getting harder to ignore, if only because it was always
there. After a year of trying to erase the static, after assuming it
was just a mechanical malfunction, an irrelevant artifact, or pigeon
guano, Penzias and Wilson began exploring the possibility that it was
real. Perhaps it was everywhere for a reason.

In 1918, sociologist Thorstein Veblen was commissioned by a popular
magazine devoted to American Jewry to write an essay on how Jewish
“intellectual productivity” would be changed if Jews were given a
homeland. At the time, Zionism was becoming a potent political
movement, and the magazine editor assumed that Veblen would make the
obvious argument: A Jewish state would lead to an intellectual boom,
as Jews would no longer be held back by institutional anti-Semitism.
But Veblen, always the provocateur, turned the premise on its head. He
argued instead that the scientific achievements of Jews — at the time,
Albert Einstein was about to win the Nobel Prize and Sigmund Freud was
a best-selling author — were due largely to their marginal status. In
other words, persecution wasn’t holding the Jewish community back — it
was pushing it forward.

The reason, according to Veblen, was that Jews were perpetual
outsiders, which filled them with a “skeptical animus.” Because they
had no vested interest in “the alien lines of gentile inquiry,” they
were able to question everything, even the most cherished of
assumptions. Just look at Einstein, who did much of his most radical
work as a lowly patent clerk in Bern, Switzerland. According to
Veblen’s logic, if Einstein had gotten tenure at an elite German
university, he would have become just another physics professor with a
vested interest in the space-time status quo. He would never have
noticed the anomalies that led him to develop the theory of
relativity.

Predictably, Veblen’s essay proved controversial, and not just
because he was a Lutheran from Wisconsin. The magazine editor
evidently was not pleased: Veblen could be read as an apologist for
anti-Semitism. But his larger point is crucial: There are advantages
to thinking on the margin. When we look at a problem from the outside,
we’re more likely to notice what doesn’t work. Instead of suppressing
the unexpected, shunting it aside with our “Oh shit!” circuit and
Delete key, we can take the mistake seriously. A new theory emerges
from the ashes of our surprise.

Modern science is populated by expert insiders, schooled in narrow
disciplines. Researchers have all studied the same thick textbooks,
which make the world of fact seem settled. This led Kuhn, the
philosopher of science, to argue that the only scientists capable of
acknowledging the anomalies — and thus shifting paradigms and starting
revolutions — are “either very young or very new to the field.” In
other words, they are classic outsiders, naive and untenured. They
aren’t inhibited from noticing the failures that point toward new
possibilities.

But Dunbar, who had spent all those years watching Stanford scientists
struggle and fail, realized that the romantic narrative of the
brilliant and perceptive newcomer left something out. After all, most
scientific change isn’t abrupt and dramatic; revolutions are rare.
Instead, the epiphanies of modern science tend to be subtle and
obscure and often come from researchers safely ensconced on the
inside. “These aren’t Einstein figures, working from the outside,”
Dunbar says. “These are the guys with big NIH grants.” How do they
overcome failure-blindness?

While the scientific process is typically seen as a lonely pursuit —
researchers solve problems by themselves — Dunbar found that most new
scientific ideas emerged from lab meetings, those weekly sessions in
which people publicly present their data. Interestingly, the most
important element of the lab meeting wasn’t the presentation — it was
the debate that followed. Dunbar observed that the skeptical (and
sometimes heated) questions asked during a group session frequently
triggered breakthroughs, as the scientists were forced to reconsider
data they’d previously ignored. The new theory was a product of
spontaneous conversation, not solitude; a single bracing query was
enough to turn scientists into temporary outsiders, able to look anew
at their own work.

But not every lab meeting was equally effective. Dunbar tells the
story of two labs that both ran into the same experimental problem:
The proteins they were trying to measure were sticking to a filter,
making it impossible to analyze the data. “One of the labs was full of
people from different backgrounds,” Dunbar says. “They had biochemists
and molecular biologists and geneticists and students in medical
school.” The other lab, in contrast, was made up of E. coli experts.
“They knew more about E. coli than anyone else, but that was what they
knew,” he says. Dunbar watched how each of these labs dealt with their
protein problem. The E. coli group took a brute-force approach,
spending several weeks methodically testing various fixes. “It was
extremely inefficient,” Dunbar says. “They eventually solved it, but
they wasted a lot of valuable time.”

The diverse lab, in contrast, mulled the problem at a group meeting.
None of the scientists were protein experts, so they began a
wide-ranging discussion of possible solutions. At first, the
conversation seemed rather useless. But then, as the chemists traded
ideas with the biologists and the biologists bounced ideas off the med
students, potential answers began to emerge. “After another 10 minutes
of talking, the protein problem was solved,” Dunbar says. “They made
it look easy.”

When Dunbar reviewed the transcripts of the meeting, he found that the
intellectual mix generated a distinct type of interaction in which the
scientists were forced to rely on metaphors and analogies to express
themselves. (That’s because, unlike the E. coli group, the second lab
lacked a specialized language that everyone could understand.) These
abstractions proved essential for problem-solving, as they encouraged
the scientists to reconsider their assumptions. Having to explain the
problem to someone else forced them to think, if only for a moment,
like an intellectual on the margins, filled with self-skepticism.

This is why other people are so helpful: They shock us out of our
cognitive box. “I saw this happen all the time,” Dunbar says. “A
scientist would be trying to describe their approach, and they’d be
getting a little defensive, and then they’d get this quizzical look on
their face. It was like they’d finally understood what was important.”

What turned out to be so important, of course, was the unexpected
result, the experimental error that felt like a failure. The answer
had been there all along — it was just obscured by the imperfect
theory, rendered invisible by our small-minded brain. It’s not until
we talk to a colleague or translate our idea into an analogy that we
glimpse the meaning in our mistake. Bob Dylan, in other words, was
right: There’s no success quite like failure.

For the radio astronomers, the breakthrough was the result of a casual
conversation with an outsider. Penzias had been referred by a
colleague to Robert Dicke, a Princeton scientist whose training was
not in astrophysics but in nuclear physics. He was best known for
his work on radar systems during World War II. Dicke had since become
interested in applying his radar technology to astronomy; he was
especially drawn to a then-strange theory called the big bang, which
postulated that the cosmos had started with a primordial explosion.
Such a blast would have been so massive, Dicke argued, that it would
have littered the entire universe with cosmic shrapnel, the
radioactive residue of genesis. (This proposal was first made in 1948
by physicists George Gamow, Ralph Alpher, and Robert Herman, although
it had been largely forgotten by the astronomical community.) The
problem for Dicke was that he couldn’t find this residue using
standard telescopes, so he was planning to build his own dish less
than an hour’s drive south of the Bell Labs antenna.

Then, in early 1965, Penzias picked up the phone and called Dicke. He
wanted to know if the renowned radar and radio telescope expert could
help explain the persistent noise bedeviling them. Perhaps he knew
where it was coming from? Dicke’s reaction was instantaneous: “Boys,
we’ve been scooped!” he said. Someone else had found what he’d been
searching for: the radiation left over from the beginning of the
universe. It had been an incredibly frustrating process for Penzias
and Wilson. They’d been consumed by the technical problem and had
spent way too much time cleaning up pigeon shit — but they had finally
found an explanation for the static. Their failure was the answer to a
different question.

And all that frustration paid off: In 1978, they received the Nobel
Prize for physics.

Contributing editor Jonah Lehrer (jonah.leh...@gmail.com) wrote about
how our friends affect our health in issue 17.10.

