U.S. Senate Committee on Environment & Public Works
Hearing Statements  
Date:   09/28/2005 
  
Statement of Michael Crichton, M.D.
Author, Doctor
The Role of Science in Environmental Policy-Making  

----------------------------------------------------------------------
Thank you Mr. Chairman, and members of the Committee. I am Michael 
Crichton, known to most people as the author of Jurassic Park and 
the creator of the television series ER. My academic background 
includes degrees from Harvard College and Harvard Medical School; I 
was a visiting lecturer in Physical Anthropology at Cambridge 
University; and a post-doctoral fellow at the Salk Institute, where 
I worked on media and science policy with Jacob Bronowski. 

My recent novel State of Fear concerns the politicization of 
scientific research. I appreciate the opportunity to discuss this 
subject. What I would like to emphasize to the committee today is 
the importance of independent verification to science.


In essence, science is nothing more than a method of inquiry. The 
method says an assertion is valid—and will be universally accepted—
only if it can be reproduced by others, and thereby independently 
verified. The impersonal rigor of the method has produced enormously 
powerful results for 400 years. 

The scientific method is utterly apolitical. A truth in science is 
verifiable whether you are black or white, male or female, old or 
young. It's verifiable whether you know the experimenter, or whether 
you don't. It's verifiable whether you like the results of a study, 
or you don't. 

Thus, when adhered to, the scientific method can transcend politics. 
Unfortunately, the converse may also be true: when politics takes 
precedence over content, it is often because the primacy of 
independent verification has been abandoned. 

Verification may take several forms. I come from medicine, where the 
gold standard is the randomized double-blind study. Not every study 
is conducted in this way, but it is held up as the ultimate goal.

In that vein, let me tell you a story. It's 1991, I am flying home 
from Germany, sitting next to a man who is almost in tears, he is so 
upset. He's a physician involved in an FDA study of a new drug. It's 
a double-blind study involving four separate teams: one plans the 
study, another administers the drug to patients, a third assesses 
the effect on patients, and a fourth analyzes the results. The teams do not 
know each other, and are prohibited from personal contact of any 
sort, on peril of contaminating the results. This man had been 
sitting in the Frankfurt airport, innocently chatting with another 
man, when they discovered to their mutual horror that they were on 
two different teams studying the same drug. They were required to report 
their encounter to the FDA. And my companion was now waiting to see 
if the FDA would declare their multi-year, multi-million-dollar 
study invalid because of this contact.


For a person with a medical background, accustomed to this degree of 
rigor in research, the protocols of climate science appear 
considerably more relaxed. A striking feature of climate science is 
that it's permissible for raw data to be "touched," or modified, by 
many hands. Gaps in temperature and proxy records are filled in. 
Suspect values are deleted because a scientist deems them erroneous. 
A researcher may elect to use parts of existing records, ignoring 
other parts. Sometimes these adjustments are necessary, sometimes 
they are questionable. Sometimes the adjustments are documented, 
sometimes not. But the fact that the data has been modified in so 
many ways inevitably raises the question of whether the results of a 
given study are wholly or partially caused by the modifications 
themselves. 

In saying this, I am not casting aspersions on the motives or fair-
mindedness of climate scientists. Rather, what is at issue is 
whether the methodology of climate science is sufficiently rigorous 
to yield a reliable result. At the very least, we should want the 
reassurance of independent verification by another lab, in which 
they make their own decisions about how to handle data, and yet 
arrive at a similar conclusion. 

Because any study where a single team plans the research, carries it 
out, supervises the analysis, and writes their own final report, 
carries a very high risk of undetected bias. That risk, for example, 
would automatically preclude the validity of the results of a 
similarly structured study that tested the efficacy of a drug. 
Nobody would believe it. 

By the same token, it would be unacceptable if the subsequent 
verification of such a study were conducted by investigators with 
whom the researcher had a professional relationship—people with 
whom, for example, he had published papers in the past. That's peer 
review by pals, and it's unavoidably biased. Yet these issues are 
central to the now-familiar story of the "Hockeystick graph" and the 
debate surrounding it. 

To summarize it briefly: in 1998-99 the American climate researcher 
Michael Mann and his co-workers published an estimate of global 
temperatures from the year 1000 to 1980. Mann's results appeared to 
show a spike in recent temperatures that was unprecedented in the 
last thousand years. His alarming report received widespread 
publicity and formed the centerpiece of the U.N.'s Third Assessment 
Report, in 2001. The graph appeared on the first page of the IPCC 
Executive Summary. 

Mann's work was initially criticized because his graph didn't show 
the well-known Medieval Warm Period, when temperatures were warmer 
than they are today, or the Little Ice Age, when they were colder 
than today. But the real fireworks began when two Canadian researchers, 
McIntyre and McKitrick, attempted to replicate Mann's study. They 
found grave errors in the work, which they detailed in 2003: 
calculation errors, data used twice, and a computer program that 
generated a hockeystick out of any data fed to it—even random data. 
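The specific criticism McIntyre and McKitrick raised concerned a principal component step that centered each proxy series on its recent decades rather than on the full record. A toy sketch of that "short centering" effect follows; it uses NumPy on purely synthetic random series, and every name and parameter here is illustrative, not Mann's actual data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series = 600, 70

# Simulate persistent, trendless "red noise" proxy series (AR(1) processes).
series = np.zeros((n_years, n_series))
for t in range(1, n_years):
    series[t] = 0.9 * series[t - 1] + rng.standard_normal(n_series)

# Conventional PCA: center each series on its full-period mean.
full_centered = series - series.mean(axis=0)

# "Short centering" (the criticized step): center each series on the
# final 100 years only, so any series that happens to drift late in
# the record looks anomalously large over the rest of the period.
short_centered = series - series[-100:].mean(axis=0)

def leading_pc(data):
    # First principal component scores via SVD of a centered matrix.
    u, s, _ = np.linalg.svd(data, full_matrices=False)
    return u[:, 0] * s[0]

pc_full = leading_pc(full_centered)
pc_short = leading_pc(short_centered)
```

Under short centering, a series whose final century happens to wander away from its long-run mean acquires inflated variance, so the leading component preferentially picks up late excursions even though every input is pure noise; with full-period centering no such preference exists. This is a sketch of the mechanism, not a reproduction of the published analysis.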

Mann's work has been dismissed as "phony" and "rubbish" by climate 
scientists around the world who subscribe to global warming. Some 
have asked why the UN accepted Mann's report so uncritically. It is 
unsettling to learn that Mann himself was in charge of the section 
of the report that included his own work. This episode of climate 
science falls far short of the standards of independent verification. 

The hockeystick controversy drags on. But I would direct the 
Committee's attention to three aspects of this story. First, six 
years passed between Mann's publication and the first detailed 
accounts of errors in his work. This is simply too long for 
policymakers to wait for validated results, particularly when the 
unverified work is being shown around the world in the meantime. 

Second, the flaws in Mann's work were not caught by climate 
scientists, but rather by outsiders—in this case, an economist and a 
mathematician. McIntyre and McKitrick had to go to great lengths to 
obtain the data from Mann's team, which obstructed them at every 
turn. When the Canadians sought help from the NSF, they were told 
that Mann was under no obligation to provide his data to other 
researchers for independent verification. 

Third, this kind of stonewalling is neither unique nor uncommon. The 
Canadians are now attempting to replicate other climate studies and 
are getting the same runaround from other researchers. One leading 
light in the field told them: "Why should I make the data available 
to you, when your aim is to try and find something wrong with it." 

Some scientists go even further, complaining that the task of 
archiving data is so time-consuming that it would prevent them from 
getting any work done. But this is nonsense. 

The first research paper I worked on, back in the 1960s, consisted 
of data on stacks of paper. When we received a request for data from 
another lab, I stood at a Xerox machine, copying one page a minute—
at 11 cents a page!—for several hours. Back in those days, a request 
for data meant a lot of work. 

But today we can burn data to a CD, or post it on an FTP site for 
downloading. Archiving data is so easy that it should have become 
standard practice a decade ago. Government grants should require 
a "replication package" as part of funding. Posting the package 
online should be a prerequisite to journal publication. And since 
it's so easy, there's really no reason to exclude anyone from 
reviewing the data. 

One problem with replication is this: while it can tell you a 
research result is faulty, it can't tell you what the right answer 
is. Policymakers need sound answers to the questions they ask. A 
better way to get them might be to give research grants for 
important projects to three independent teams simultaneously. A 
provision of the grant would be that at the end of the study period, 
all three papers would be published together, with each group 
commenting on the findings of the other. I believe this would be the 
fastest way to get verified answers to important questions. 

But if independent verification is the heart of science, what should 
policymakers do with research that is unverifiable? For example, the 
UN Third Assessment Report defines general circulation climate 
models as unverifiable. If that's true, are their predictions of any 
use to policymakers? 

Arguably not. In 2000, Christopher Landsea and co-workers studied 
various computer models that had forecast the strong El Niño event 
of 1997-98. They concluded that the older, simpler models—hardly 
more than simple formulae—had performed much better than the global 
circulation models when predicting the arrival and strength of the 
El Niño. 

If policymakers decide to weight their decisions in favor of 
verified research, that will provoke an effort by climate scientists 
to demonstrate their concerns using objectively verifiable research. 

In closing, I want to state emphatically that nothing in my remarks 
should be taken to imply that we can ignore our environment, or that 
we should not take climate change seriously. On the contrary, we 
must dramatically improve our record on environmental management. 
That's why a focused effort on climate science, aimed at securing 
sound, independently verified answers to policy questions, is so 
important now. 

I would remind the committee that in the end, it is the proper 
function of government to set standards for the integrity of 
information it uses to make policy, and to ensure that standards are 
maintained. Those who argue government should refrain from mandating 
quality standards for scientific research—including some 
professional organizations—are merely self-serving. In an 
information society, public safety depends on the integrity of 
public information. And only government can perform that task. 


 
# # # # # 





