Re: [ECOLOG-L] Ecology and Theory and Evidence and Limitations and so on Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread Wayne Tyson
Maybe this is an oversimplification, and I readily admit to being in 
a fog about the long list of "conceptual advances" (however 
useful--in their fashion), but isn't the point of all of them to 
strain, without mercy, the biases out of hypotheses and let the light 
of reality shine through, no matter what the result?


WT

At 10:16 AM 2/21/2008, Gary Grossman wrote:

It would be interesting to have this discussion after reading Don Strong's
seminal 1980 paper "Null Hypotheses in Ecology" (Synthese 43:271-285).
Although I use AIC in my own research (see Grossman et al. 2006, Ecol.
Monogr. 76:217), IMO Anderson, Johnson and others have thrown out the baby
with the bath water when they state that null hypotheses are trivial in
ecology.  In fact, the whole neutral model approach in ecology really is
based on null hypotheses, and it has been one of the most productive areas
in ecology since the '80s (see the great book by Gotelli and Graves, Null
Models in Ecology).  Prior to those conceptual advances we had "models"
(e.g. the competitionist model) and many investigators worked hard to twist
their data to fit the "model" (it could be argued that the development of
neutral models was a paradigm shift in the Kuhnian sense).  Frankly,
frequentist, information-theoretic, and Bayesian approaches all have their
place in ecology, and we should just get over trashing frequentist
approaches.  To twist a phrase: "Statistics don't misuse data, people
misuse data."  To suggest that information-theoretic approaches are less
arbitrary because they don't use cut-off values is inappropriate, because
cut-off values are used for the interpretation of wi values and Delta-AIC
values.  Nonetheless, weight-of-evidence approaches are fantastic tools for
ecology, but they are not the be-all and end-all for our field.  There have
been several back-and-forth exchanges in the literature over the last 5-6
years regarding these points, so I won't belabor them here.
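The null model approach mentioned above can be sketched in a few lines. This is a minimal illustration only (the co-occurrence statistic and the row-shuffling randomization are a simplified stand-in, not any particular published algorithm): compare an observed pattern against a distribution generated by randomizing the presence/absence data.

```python
import numpy as np

def checkerboard_score(m):
    """Mean 'checkerboard units' per species pair in a presence/absence
    matrix (rows = species, cols = sites); higher = more segregation."""
    n_species = m.shape[0]
    scores = []
    for i in range(n_species):
        for j in range(i + 1, n_species):
            shared = np.sum(m[i] & m[j])          # sites where both occur
            ri, rj = m[i].sum(), m[j].sum()       # row totals
            scores.append((ri - shared) * (rj - shared))
    return np.mean(scores)

def null_model_test(m, n_iter=1000, rng=None):
    """Compare the observed statistic to a null distribution built by
    shuffling each species' occurrences across sites independently."""
    rng = np.random.default_rng(rng)
    observed = checkerboard_score(m)
    null = np.empty(n_iter)
    for k in range(n_iter):
        shuffled = np.array([rng.permutation(row) for row in m])
        null[k] = checkerboard_score(shuffled)
    p = np.mean(null >= observed)  # one-tailed: more segregated than null
    return observed, p
```

The point is the structure of the inference, not the particular statistic: the "model" being tested is that occurrences are arranged at random, and the observed pattern is judged against that null distribution.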


cheers,

--
Gary D. Grossman


Distinguished Research Professor - Animal Ecology
Warnell School of Forestry & Natural Resources
University of Georgia
Athens, GA, USA 30602

http://www.arches.uga.edu/~grossman

Board of Editors - Animal Biodiversity and Conservation
Editorial Board - Freshwater Biology
Editorial Board - Ecology Freshwater Fish


Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread Wirt Atmar
Jeff Houlahan writes:

> That said, science is many things - 'a predictive
> enterprise, not some form of mindless after-the-fact exercise in number
> crunching.' - fits under the umbrella but I don't think captures the whole
> enterprise.  Sequencing the human genome was, in my opinion, a version of
> mindless number crunching (although perhaps somebody can put that effort in a
> hypothesis testing context that I haven't thought of).  I think most people
> would be hard pressed to say it wasn't science.  In fact, there is an
> emerging field of statistics (data mining) that seems to be useful in
> developing scientific hypotheses and is all about the 'mindless after-the-
> fact exercise in number crunching'.  My feeling is that data can provide
> hypotheses or test them.  When it does the first, it is a very useful part of
> science but it is not predictive and it does not test hypotheses (null,
> competing or otherwise).  When it does the latter it falls into the category
> that Feynman was describing.

> I think the reason we often get these trivial tests of hypotheses is because
> there is this sense that science is only about testing hypotheses - therefore
> to do science I must test a hypothesis...whether there is a meaningful one or
> not.  In my opinion, science can also just be about looking for patterns that
> we can use to suggest hypotheses.

These kinds of discussions can quickly become pretentious, but I don't want this
one to become so. There is a deep joy associated with doing science, and you're
right of course, science isn't purely about prediction. It also has an
exploratory component, where you go over the mountain just to see
what you can see.

Nonetheless, prediction is still our only measure of how well we understand the
world around us, and no one has ever said that making these predictions was
supposed to be easy. If it were, none of us would be getting the large salaries
we're being paid.

Still, your last sentence strikes a deep resonant chord with me. No one
agrees more than I do that to do science we must seek out patterns. Robert H.
MacArthur's first line in his 1972 book, "Geographical Ecology," is: "To do
science is to search for repeated patterns, not simply to accumulate facts, and
to do the science of geographical ecology is to search for patterns of plant and
animal life that can be put on a map."

But seeking out these patterns is only the beginning. The Latin word "scientia"
is generally translated as "knowledge," but I much prefer to translate it as
"understanding," its alternate meaning, and there is a difference between the
two. Understanding is by far the higher state of grace.

There is only one science, regardless of what subdiscipline you engage in, and
your statement that perceived patterns in the data can be used to suggest
hypotheses has been said a hundred times before. It was certainly said most
clearly by the astronomer Alan Sandage in the first few paragraphs of his 1975
book, "Galaxies and the Universe." Indeed, he precisely recapitulates your last
sentence in his first paragraph:

"The first step in the development of most sciences is a classification of the
objects under study. Its purpose is to look for patterns from which hypotheses
that connect things and events can be formulated by a method proposed and used
by Bacon (1620). If the classification is useful, the hypotheses lead to
predictions which, if verified, help to form the theoretical foundations of a
subject."

But he goes on to quite rightly say that doing just this is insufficient for
doing science. In the end, we want to understand causation and mechanism. We
want to understand the rules -- the physics -- that govern the system under
study. We haven't done our job until we achieve this understanding.

Sandage continues:

"Simple description, although not sufficient as a final system, is often an
important first step... But as a classification develops, a next step is often
to group the objects of a set into classes according to some continuously
varying parameter. If the parameter proves to be physically important, then the
classification itself becomes fundamental, and often leads quite directly to the
theoretical concepts."

Ecology has had this large psychological pendulum that has swung through its
core over the last several decades. I first became involved in ecological
research during the time of "systems ecology," in the late 1960's and early
1970's, a time of Lotka, Volterra, MacArthur, Slobodkin and Hutchinson, and I was
greatly entranced by the idea that there are rules that govern the interaction
of life on this planet.

But I was also impressed at the time that the psychological attitude then seemed
to recapitulate that of The Golden Age of Reason, a time when Newton's laws of
motion were first being introduced into Europe, where for the first time the
world began to make sense, to the point that poetry was written about the
effect:

   Nature and nature'

Re: [ECOLOG-L] Anderson's new book, "Model Based Inference in the Life Sciences"

2008-02-21 Thread Volker Bahn

Wirt Atmar wrote:

In 1964, Richard Feynman, in a lecture to students at Cornell that's available
on YouTube, explained the standard procedure that has been adopted by
experimental physics in this manner:

"How would we look for a new law? In general we look for a new law by the
following process. First, we guess it. (laughter) Then we... Don't laugh.
That's the damned truth. Then we compute the consequences of the guess... to
see if this is right, to see if this law we guessed is right, to see what it
would imply. And then we compare those computation results to nature. Or we say
to compare it to experiment, or to experience. Compare it directly with
observations to see if it works.

"If it disagrees with experiment, it's wrong. In that simple statement is the
key to science. It doesn't make a difference how beautiful your guess is. It
doesn't make a difference how smart you are, who made the guess or what his
name is... (laughter) If it disagrees with experiment, it's wrong. That's all
there is to it."

-- http://www.youtube.com/watch?v=ozF5Cwbt6RY

In physics, the model comes first, not afterwards, and that small difference
underlies the whole of the success that physics has had in explaining the
mechanics of the world that surrounds us.



I agree with much of what you cited, and in large part also with David
Anderson's crusade against hypothesis testing and for multi-model inference
(although it isn't exactly a new topic). However, I'm really tired of hearing
about the physics envy cultivated among so many ecologists. The last paragraph
especially expresses the whole notion well: if only ecologists had used
such-and-such an approach, as physicists did, we would by now have the same
set of conclusive and stringent laws and would be able to successfully
construct ecosystems from scratch. In reality, ecology has had loads of
rigorous scientists, bright minds and multi-model inference, but the
signal-to-noise ratio in our systems is completely different from the systems
explored in physics. If you were to be a good scientist, as Feynman suggests,
and come up with detailed theories/laws in ecology, build models based on
them, make predictions and try to validate them on data from the real world,
you would always have to reject them, because you can always find an
ecological system that will violate your predictions. I still believe that
this would be the right way to progress in ecology, but I think it is folly to
expect the same "clean" results as in physics. A good case in point is the
unified neutral theory of biodiversity. Hubbell came up with a theory, built a
mathematical machinery according to it, and validated his predictions on
empirical data. Then people tried to apply his theory and predictions to other
systems, and soon failures to explain an acceptable level of variation in
certain systems became apparent. According to Feynman, then, the theory is
"wrong and that's all there is to it." I, in contrast, believe that we have to
take into consideration the low signal-to-noise ratio in our systems and the
staggering number of more or less equally important factors that govern them,
plus the multitude of feedback loops and time lags, before passing such harsh
judgments about ecology. And I don't believe that switching from hypothesis
tests to multi-model inference will get us to a set of conclusive and
stringent laws as they exist in physics any time soon. But I do believe that
the described way is the right path to advance ecology.
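The signal-to-noise argument can be made concrete with a toy simulation (the linear model, noise levels, and sample size here are all hypothetical, chosen purely for illustration): even when you fit the *true* model to data generated from it, the variance explained collapses as noise grows.

```python
import numpy as np

def r_squared_under_noise(noise_sd, n=500, seed=0):
    """Generate data from a known true model (y = 2x + 1) plus Gaussian
    noise, fit that same model form by least squares, and return R^2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, n)
    y = 2 * x + 1 + rng.normal(0, noise_sd, n)
    slope, intercept = np.polyfit(x, y, 1)   # fit the correct model form
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

low = r_squared_under_noise(0.05)   # physics-like measurement precision
high = r_squared_under_noise(2.0)   # ecology-like process noise
```

With small noise the correct model explains nearly all the variance; with large noise the very same correct model explains only a small fraction, which is the point about judging ecological theories by physics standards.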


Volker


--

--
Volker Bahn
Department of Biology
McGill University
Stewart Biol. Bldg. W3/5
1205 ave Docteur Penfield
Montreal, QC, H3A 1B1
Canada
t: (514) 398-6428
f: (514) 398-5069
[EMAIL PROTECTED]
www.volkerbahn.com

Lat-Long:   
45.50285, -73.5814
--


Re: [ECOLOG-L] Ecology and Theory and Evidence and Limitations and so on Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread Wayne Tyson
That's why I think context, especially with a character like Feynman, 
might be crucial.  Damned near everything he said was with a wink, 
and I suspect he might not have been above making bold misstatements 
to lure great minds out of hiding.  And he could have been just plain 
"wrong."  However, I think the crucial lesson here is that statements 
can, even while "wrong," contain, or lead to, truths.  The 
presumption of correctness is poisonous to the fertile mind; so is 
the terror of being wrong.


My contact with Feynman has been zero, except when his brilliance 
shone with penetrating energy, yea, even through the Toob.  And his 
books, of course.  I did attend a talk in the Arcadia CA library when 
Ralph Leighton's book, "Tuva or Bust!" came out.  It was clear that 
Feynman's life force (determined joy?) had stimulated Leighton to 
explore his own inner self.  Who could forget the ironic challenge he 
laid before ass-covering bureaucrats and politicians with a simple 
glass of ice water and an O-ring?  That one act alone illuminated the 
institutional rigidity (hence, again ironically, the brittleness) of 
one of the "greatest" "scientific" institutions in the world.  Anyone 
should have been able to see, with crystal clarity, that the 
government was being run by self-serving bozos who could blithely 
override technical competence (none dare call it treason?).  Sadder 
yet, no engineer, no scientist, no manager, no flunky in the whole 
in-the-loop crowd, would risk his job in defiance of stupidity.  The 
failure of a robo-nation to rise up in riot must have weighed heavily 
on Feynman.


Even if Feynman had been a dummy, he would have been a personality of 
great magnitude.


WT

At 08:13 AM 2/21/2008, William Silvert wrote:
It might be worth adding that Einstein probably would also have 
disagreed with Feynman on this point. The original test of general 
relativity proved it false. Einstein didn't give up, and is even 
alleged to have faked some calculations to support his view, and 
eventually a flaw was found in the experiment and subsequent work 
was consistent with the theory. Hey, if you have a good theory you 
don't give it up without a fight.


I might add that my only personal contact with Feynman was at a 
meeting of the American Physical Society where he presented his 
black hole theory of the nucleus. It was wrong.


Bill Silvert


- Original Message - From: "Wayne Tyson" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, February 21, 2008 5:58 AM
Subject: [ECOLOG-L] Ecology and Theory and Evidence and Limitations 
and so on Re: [ECOLOG-L] Anderson's new book,



There's one distinction that might need to be made, maybe 
not.  When Feynman said, "If it disagrees with [the?] experiment, 
it's wrong. In that simple statement is the
key to science. It doesn't make a difference how beautiful your 
guess is. It doesn't make a difference how smart you are, who made 
the guess or what his name
is... (laughter) If it disagrees with [the?] experiment, it's 
wrong. That's all there is to it." I hope everyone who reads this 
list understands that Feynman means that it is the guess that is wrong 
if the experiment demonstrates otherwise (not the experiment itself), 
or that if 
I am mistaken in this presumption that I will be corrected.  I 
suspect that a transcript of Feynman's lecture, especially a 
fragment thereof, could be misinterpreted in the absence of the 
context of the actual lecture, even Feynman's way of expressing himself.


Re: [ECOLOG-L] Ecology and Theory and Evidence and Limitations and so on Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread Gary Grossman
It would be interesting to have this discussion after reading Don Strong's
seminal 1980 paper "Null Hypotheses in Ecology" (Synthese 43:271-285).
Although I use AIC in my own research (see Grossman et al. 2006, Ecol.
Monogr. 76:217), IMO Anderson, Johnson and others have thrown out the baby
with the bath water when they state that null hypotheses are trivial in
ecology.  In fact, the whole neutral model approach in ecology really is
based on null hypotheses, and it has been one of the most productive areas
in ecology since the '80s (see the great book by Gotelli and Graves, Null
Models in Ecology).  Prior to those conceptual advances we had "models"
(e.g. the competitionist model) and many investigators worked hard to twist
their data to fit the "model" (it could be argued that the development of
neutral models was a paradigm shift in the Kuhnian sense).  Frankly,
frequentist, information-theoretic, and Bayesian approaches all have their
place in ecology, and we should just get over trashing frequentist
approaches.  To twist a phrase: "Statistics don't misuse data, people
misuse data."  To suggest that information-theoretic approaches are less
arbitrary because they don't use cut-off values is inappropriate, because
cut-off values are used for the interpretation of wi values and Delta-AIC
values.  Nonetheless, weight-of-evidence approaches are fantastic tools for
ecology, but they are not the be-all and end-all for our field.  There have
been several back-and-forth exchanges in the literature over the last 5-6
years regarding these points, so I won't belabor them here.
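The wi and Delta-AIC values mentioned above are straightforward to compute. A minimal sketch of Burnham and Anderson's standard formulation, with made-up AIC values for three hypothetical candidate models:

```python
import numpy as np

def akaike_weights(aic_values):
    """Compute Delta-AIC and Akaike weights (w_i) for candidate models.
    Delta_i = AIC_i - AIC_min; w_i = exp(-Delta_i/2) normalized to sum
    to 1, interpretable as the relative weight of evidence for model i."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()            # Delta-AIC: 0 for the best model
    rel_lik = np.exp(-0.5 * delta)     # relative likelihood of each model
    weights = rel_lik / rel_lik.sum()  # normalize to Akaike weights
    return delta, weights

# hypothetical AIC scores for three candidate models
delta, w = akaike_weights([100.0, 102.0, 110.0])
```

Grossman's point is visible right in the arithmetic: the numbers themselves carry no decision; interpreting a Delta-AIC of 2 (or a weight of 0.7) as "support" still requires an agreed cut-off, just as a P-value does.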


cheers,

-- 
Gary D. Grossman


Distinguished Research Professor - Animal Ecology
Warnell School of Forestry & Natural Resources
University of Georgia
Athens, GA, USA 30602

http://www.arches.uga.edu/~grossman

Board of Editors - Animal Biodiversity and Conservation
Editorial Board - Freshwater Biology
Editorial Board - Ecology Freshwater Fish


Re: [ECOLOG-L] Ecology and Theory and Evidence and Limitations and so on Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread William Silvert
It might be worth adding that Einstein probably would also have disagreed 
with Feynman on this point. The original test of general relativity proved 
it false. Einstein didn't give up, and is even alleged to have faked some 
calculations to support his view, and eventually a flaw was found in the 
experiment and subsequent work was consistent with the theory. Hey, if you 
have a good theory you don't give it up without a fight.


I might add that my only personal contact with Feynman was at a meeting of 
the American Physical Society where he presented his black hole theory of 
the nucleus. It was wrong.


Bill Silvert


- Original Message - 
From: "Wayne Tyson" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, February 21, 2008 5:58 AM
Subject: [ECOLOG-L] Ecology and Theory and Evidence and Limitations and so 
on Re: [ECOLOG-L] Anderson's new book,



There's one distinction that might need to be made, maybe not.  When 
Feynman said, "If it disagrees with [the?] experiment, it's wrong. In that 
simple statement is the
key to science. It doesn't make a difference how beautiful your guess is. 
It doesn't make a difference how smart you are, who made the guess or what 
his name
is... (laughter) If it disagrees with [the?] experiment, it's wrong. 
That's all there is to it." I hope everyone who reads this list 
understands that Feynman means that it is the guess that is wrong if the 
experiment demonstrates otherwise (not the experiment itself), or that if I am mistaken in 
this presumption that I will be corrected.  I suspect that a transcript of 
Feynman's lecture, especially a fragment thereof, could be misinterpreted 
in the absence of the context of the actual lecture, even Feynman's way of 
expressing himself. 


Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread William Silvert
This has been an interesting discussion, especially for me since I have spent 
half my career as a theoretical physicist and half in marine ecology, and I am 
a great admirer of Richard Feynman. I think that the key to all this is that 
science involves trying to find explanatory patterns in nature, which 
involves either looking at existing data or looking for new data. Much 
science involves just looking around, such as the amazing work that was done 
by simply exploring abysses in the ocean and more recently investigating the 
fauna under the Antarctic ice. Sequencing genomes and number crunching are 
explorations of this kind.


Most science is a bit mixed though. High energy physicists are building huge 
accelerators in hopes of finding the Higgs boson (a hypothetical particle) 
but they are also on the lookout for unexpected results.


The role of statistics in physics is relatively minor; it is simply used to 
see whether the patterns we see seem real. It is analogous to the well-known 
psychology experiment where you see two lines (<---> and >---<) 
and one looks longer than the other, but one can use a tool - a measuring 
stick - to see that in fact they are the same length. Despite my long study 
of physics, as both an undergraduate physics major and a PhD student, I was 
never asked to take a statistics course, although some statistics was 
covered in my freshman laboratory work (ironically that is where I learned 
about propagation of error, something that few ecologists seem to know). In 
fields where statistics are relevant, such as high-energy physics involving 
the analysis of millions of particle tracks, most physicists develop their 
own statistical concepts.
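The propagation-of-error rule alluded to above is, for independent uncertainties, the usual first-order quadrature formula. A minimal sketch with hypothetical measurements (an ecological example: the area of a plot from two measured sides):

```python
import math

def propagate_product(x, sx, y, sy):
    """First-order error propagation for f = x * y with independent
    uncertainties: (sf/f)^2 = (sx/x)^2 + (sy/y)^2."""
    f = x * y
    sf = abs(f) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return f, sf

def propagate_sum(x, sx, y, sy):
    """For f = x + y with independent errors: sf^2 = sx^2 + sy^2."""
    return x + y, math.sqrt(sx ** 2 + sy ** 2)

# area of a 10 m x 5 m plot, each side measured to +/- 0.1 m
area, sigma = propagate_product(10.0, 0.1, 5.0, 0.1)
```

The relative errors add in quadrature for products, and the absolute errors add in quadrature for sums; that asymmetry is the piece of freshman-laboratory statistics being referred to.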


There is one point where I disagree with the quotations from Feynman, "If it 
disagrees with experiment, it's wrong." It's wrong if the experiment is 
right. In many cases I have found that the experimental data are wrong (and 
my criterion for wrong is that after discussing the experiment the 
experimentalists agree that their data are wrong, which usually means 
misinterpreted). This is more of a problem in ecology than in physics 
because theory and experiment are closer in physics, and experimentalists 
thus pay careful attention to identifying the underlying assumptions and 
problems of interpretation of their data. All experiments after all are 
based on models, and it is hard to do a good experiment if you don't 
understand the theory behind what you are doing.


Since Feynman's name has been raised, I will recall an incident that 
occurred on this list several years ago. I referred to Feynman's excellent 
book, "Surely You're Joking, Mr. Feynman!" and mentioned an experiment he did 
with ants in his kitchen. An angry response followed with a complaint that 
he knew nothing about ant behaviour and was totally unqualified to carry out 
such experiments. Draw your own conclusions, and stay out of the kitchen.


Bill Silvert


- Original Message - 
From: "Jeff Houlahan" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, February 20, 2008 8:46 PM
Subject: Re: [ECOLOG-L] Anderson's new book,


Hi Wirt, I completely agree with almost all of what you (and David) wrote. 
Feynman is talking about a real hypothesis that arose from a great deal of 
thought and creativity...not one that has been attached with baling wire, 
duct tape and a little leftover Juicy Fruit to a pile of data that 
happened to be sitting around.

That said, science is many things - 'a predictive
enterprise, not some form of mindless after-the-fact exercise in number 
crunching.' - fits under the umbrella but I don't think captures the whole 
enterprise.  Sequencing the human genome was, in my opinion, a version of 
mindless number crunching (although perhaps somebody can put that effort 
in a hypothesis testing context that I haven't thought of).  I think most 
people would be hard pressed to say it wasn't science.  In fact, there is 
an emerging field of statistics (data mining) that seems to be useful in 
developing scientific hypotheses and is all about the 'mindless 
after-the-fact exercise in number crunching'.  My feeling is that data can 
provide hypotheses or test them.  When it does the first, it is a very 
useful part of science but it is not predictive and it does not test 
hypotheses (null, competing or otherwise).  When it does the latter it 
falls into the category that Feynman was describing.
I think the reason we often get these trivial tests of hypotheses is 
because there is this sense that science is only about testing 
hypotheses - therefore to do science I must test a hypothesis...whether 
there is a meaningful one or not.  In my opinion, science can also just be 
about looking for patterns that we can use to suggest hypotheses. 
Hypotheses have to be tested to be useful but the patterns we see in 
nature (and those patter

[ECOLOG-L] Ecology and Theory and Evidence and Limitations and so on Re: [ECOLOG-L] Anderson's new book,

2008-02-21 Thread Wayne Tyson
statistics (data mining) that 
seems to be useful in developing scientific 
hypotheses and is all about the 'mindless 
after-the-fact exercise in number 
crunching'.  My feeling is that data can provide 
hypotheses or test them.  When it does the 
first, it is a very useful part of science but 
it is not predictive and it does not test 
hypotheses (null, competing or otherwise).  When 
it does the latter it falls into the category that Feynman was describing.
I think the reason we often get these trivial 
tests of hypotheses is because there is this 
sense that science is only about testing 
hypotheses - therefore to do science I must test 
a hypothesis...whether there is a meaningful one 
or not.  In my opinion, science can also just be 
about looking for patterns that we can use to 
suggest hypotheses.  Hypotheses have to be 
tested to be useful but the patterns we see in 
nature (and those patterns are often less 
distinct without number crunching) are almost 
always the birthplace of hypotheses. Best.


Jeff H

-Original Message-
From: Wirt Atmar <[EMAIL PROTECTED]>
To: ECOLOG-L@LISTSERV.UMD.EDU
Date: Wed, 20 Feb 2008 12:03:54 -0700
Subject: [ECOLOG-L] Anderson's new book, "Model 
Based Inference in the Life Sciences"


I just purchased David Anderson's new book, "Model Based Inference in the Life
Sciences: a primer on evidence," and although I've only had the opportunity to
read just the first two chapters, I wanted to write and express my enthusiasm
for both the book and especially its first chapter.

David and Ken Burnham once bought me lunch, and because my loyalties are easily
purchased, I may be somewhat biased in my approach towards the book, but David
writes something very important in the first chapter that I have been mildly
railing against for some time now too: the uncritical overuse of null hypotheses
in ecology. Indeed, I believe this to be such an important topic that I wish he
had extended the section for several more pages.

What he does write is this, in part:

"It is important to realize that null hypothesis testing was *not* what
Chamberlin wanted or advocated. We so often conclude, essentially, 'We rejected
the null hypothesis that was uninteresting or implausible in the first place, P
< 0.05.' Chamberlin wanted an *array* of *plausible* hypotheses derived and
subjected to careful evaluation. We often fail to fault the trivial null
hypotheses so often published in scientific journals. In most cases, the null
hypothesis is hardly plausible and this makes the study vacuous from the
outset...

"C.R. Rao (2004), the famous Indian statistician, recently said it well, '...in
current practice of testing a null hypothesis, we are asking the wrong question
and getting a confusing answer'" (2008, pp. 11-12).

This is so completely different from the extraordinarily successful approach
that has been adopted by physics.

In ecology, an experiment is most normally designed so its results may be
statistically tested against a null hypothesis. In this procedure, data
analysis is primarily an a posteriori process, but this is an intrinsically
weak test philosophically. In the end, you rarely understand more about the
processes in force than you did before you began. But the analyses
characteristic of physics don't work that way.

In 1964, Richard Feynman, in a lecture to students at Cornell that's available
on YouTube, explained the standard procedure that has been adopted by
experimental physics in this manner:

"How would we look for a new law? In general we look for a new law by the
following process. First, we guess it. (laughter) Then we... Don't laugh.
That's the damned truth. Then we compute the consequences of the guess... to
see if this is right, to see if this law we guessed is right, to see what it
would imply. And then we compare those computation results to nature. Or we say
to compare it to experiment, or to experience. Compare it directly with
observations to see if it works.

"If it disagrees with experiment, it's wrong. In that simple statement is the
key to science. It doesn't make a difference how beautiful your guess is. It
doesn't make a difference how smart you are, who made the guess or what his
name is... (laughter) If it disagrees with experiment, it's wrong. That's all
there is to it."

-- http://www.youtube.com/watch?v=ozF5Cwbt6RY

In physics, the model comes first, not afterwards, and that small difference
underlies the whole of the success that physics has had in explaining the
mechanics of the world that surrounds us.

The entire array of plausible hypotheses that were advocated by Chamberlin
don't all have to be present during the first experimental attempt at
verification of the first hypothesis; they can occur sequentially over a
period of years.
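Chamberlin's array of plausible hypotheses, evaluated by an information criterion rather than against a null, can be sketched as follows. This is a toy example only: the polynomial candidate models, the simulated data, and the Gaussian AIC (with constant terms dropped) are illustrative assumptions, not Anderson's worked examples.

```python
import numpy as np

def aic_for_fit(y, pred, k):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k,
    where k counts the fitted parameters."""
    n = len(y)
    rss = np.sum((y - pred) ** 2)
    return n * np.log(rss / n) + 2 * k

# simulated data; the true relation here is quadratic
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = 1 + 2 * x + 3 * x ** 2 + rng.normal(0, 0.1, 100)

candidates = {}
for degree in (1, 2, 3):                 # an array of plausible models
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    candidates[degree] = aic_for_fit(y, pred, degree + 1)

best = min(candidates, key=candidates.get)   # lowest AIC wins
```

Every candidate is a substantive hypothesis about the system; none is a strawman null, and the comparison ranks all of them at once rather than rejecting one in isolation.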

As David continues, "We must encourage and 
re

Re: [ECOLOG-L] Anderson's new book, "Model Based Inference in the Life Sciences"

2008-02-20 Thread Matheus Carvalho
I recently read a similar thing in the book "Data
Analysis and Graphics Using R" by Maindonald & Braun. I
will reproduce it here. In fact, it is itself a
quotation from Tukey, J. W. (1991). The philosophy of
multiple comparisons. Statistical Science 6:100-116.

"Statisticians classically asked the wrong question -
and were willing to answer with a lie, one that was
often a downright lie. They asked 'Are the effects of
A and B different?' and they were willing to say 'no'.

All we know about the world teaches us that the
effects of A and B are always different - in some
decimal place - for every A and B. Thus, asking 'Are
the effects different?' is foolish. What we should be
answering first is 'Can we tell the direction in which
the effects of A differ from the effects of B?' In
other words, can we be confident about the direction
from A to B? Is it 'up', 'down', or 'uncertain'?"

Later, in the words of the book's authors:

"Tukey argues that we should never conclude that we
'accept the null hypothesis'."
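Tukey's reframing (ask for the direction of the difference rather than testing equality) can be sketched with an approximate confidence interval. The z-based interval and the sample data are illustrative assumptions, not taken from Tukey's paper:

```python
import math
from statistics import mean, stdev

def direction_of_effect(a, b, z=1.96):
    """Instead of testing 'are A and B equal?', report the direction of
    mean(B) - mean(A): 'up', 'down', or 'uncertain', based on an
    approximate 95% confidence interval assuming independent samples."""
    diff = mean(b) - mean(a)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    lo, hi = diff - z * se, diff + z * se
    if lo > 0:
        return "up"
    if hi < 0:
        return "down"
    return "uncertain"
```

The three possible answers map directly onto Tukey's 'up', 'down', or 'uncertain': the procedure can remain undecided about direction, but it never "accepts" that two effects are identical.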


--- Wirt Atmar <[EMAIL PROTECTED]> escreveu:

> I just purchased David Anderson's new book, "Model
> Based Inference in the Life
> Sciences: a primer on evidence," and although I've
> only had the opportunity to
> read just the first two chapters, I wanted to write
> and express my enthusiasm
> for both the book and especially its first chapter.
> 
> David and Ken Burnham once bought me lunch, and
> because my loyalties are easily
> purchased, I may be somewhat biased in my approach
> towards the book, but David
> writes something very important in the first chapter
> that I have been mildly
> railing against for some time now too: the uncritical
> overuse of null hypotheses
> in ecology. Indeed, I believe this to be such an
> important topic that I wish he
> had extended the section for several more pages.
> 
> What he does write is this, in part:
> 
> "It is important to realize that null hypothesis
> testing was *not* what
> Chamberlin wanted or advocated. We so often
> conclude, essentially, 'We rejected
> the null hypothesis that was uninteresting or
> implausible in the first place, P
> < 0.05.' Chamberlin wanted an *array* of *plausible*
> hypotheses derived and
> subjected to careful evaluation. We often fail to
> fault the trivial null
> hypotheses so often published in scientific
> journals. In most cases, the null
> hypothesis is hardly plausible and this makes the
> study vacuous from the
> outset...
> 
> "C.R. Rao (2004), the famous Indian statistician,
> recently said it well, '...in
> current practice of testing a null hypothesis, we
> are asking the wrong question
> and getting a confusing answer'" (2008, pp. 11-12).
> 
> This is so completely different from the
> extraordinarily successful approach
> that has been adopted by physics.
> 
> In ecology, an experiment is most normally designed
> so its results may be
> statistically tested against a null hypothesis. In
> this procedure, data analysis
> is primarily a posteriori process, but this is an
> intrinsically weak test
> philosophically. In the end, you rarely understand
> more about the processes in
> force than you did before you began. But the
> analyses characteristic of physics
> don’t work that way.
> 
> In 1964, Richard Feynman, in a lecture to students
> at Cornell that's available
> on YouTube, explained the standard procedure that
> has been adopted by
> experimental physics in this manner:
> 
> "How would we look for a new law? In general we look
> for a new law by the
> following process. First, we guess it. (laughter)
> Then we... Don't laugh. That's
> the damned truth. Then we compute the consequences
> of the guess... to see if
> this is right, to see if this law we guessed is
> right, to see what it would
> imply. And then we compare those computation results
> to nature. Or we say to
> compare it to experiment, or to experience. Compare
> it directly with
> observations to see if it works.
> 
> "If it disagrees with experiment, it's wrong. In
> that simple statement is the
> key to science. It doesn't make a difference how
> beautiful your guess is. It
> doesn't make a difference how smart you are, who
> made the guess or what his name
> is... (laughter) If it disagrees with experiment,
> it's wrong. That's all there
> is to it."
> 
> -- http://www.youtube.com/watch?v=ozF5Cwbt6RY
> 
> In physics, the model comes first, not afterwards,
> and that small difference
> underlies the whole of the success that physics has
> had in explaining the
> mechanics of the world that surrounds us.
> 
> The entire array of plausible hypotheses that were
> advocated by Chamberlin don't
> all have to present during the first experimental
> attempt at verification of the
> first hypothesis; they can occur sequentially over a
> period of years.
> 
> As David continues, "We must encourage and reward
> hard thinking. There must be a
> premium on thinking, innovation, synthesis and
> creativity" (p. 12), and this
> hard thinking must be done 

Re: [ECOLOG-L] Anderson's new book,

2008-02-20 Thread Jeff Houlahan
Hi Wirt, I completely agree with almost all of what you (and David) wrote.  
Feynman is talking about a real hypothesis that arose from a great deal of 
thought and creativity...not one that has been attached with baling wire, duct 
tape and a little leftover Juicy Fruit to a pile of data that happened to be 
sitting around.  
That said, science is many things. 'A predictive enterprise, not some form of 
mindless after-the-fact exercise in number crunching' fits under the umbrella, 
but I don't think it captures the whole enterprise.  Sequencing the human genome 
was, in my opinion, a version of mindless number crunching (although perhaps 
somebody can put that effort in a hypothesis-testing context that I haven't 
thought of).  I think most people would be hard pressed to say it wasn't 
science.  In fact, there is an emerging field of statistics (data mining) that 
seems to be useful in developing scientific hypotheses and is all about the 
'mindless after-the-fact exercise in number crunching'.  My feeling is that data 
can provide hypotheses or test them.  When it does the former, it is a very 
useful part of science, but it is not predictive and it does not test hypotheses 
(null, competing, or otherwise).  When it does the latter, it falls into the 
category that Feynman was describing.  I think the reason we often get these 
trivial tests of hypotheses is that there is a sense that science is only about 
testing hypotheses; therefore, to do science I must test a hypothesis, whether 
there is a meaningful one or not.  In my opinion, science can also just be about 
looking for patterns that we can use to suggest hypotheses.  Hypotheses have to 
be tested to be useful, but the patterns we see in nature (and those patterns 
are often less distinct without number crunching) are almost always the 
birthplace of hypotheses. Best.

Jeff H

-Original Message-
From: Wirt Atmar <[EMAIL PROTECTED]>
To: ECOLOG-L@LISTSERV.UMD.EDU
Date: Wed, 20 Feb 2008 12:03:54 -0700
Subject: [ECOLOG-L] Anderson's new book, "Model Based Inference in the Life 
Sciences"


[ECOLOG-L] Anderson's new book, "Model Based Inference in the Life Sciences"

2008-02-20 Thread Wirt Atmar
I just purchased David Anderson's new book, "Model Based Inference in the Life
Sciences: a primer on evidence," and although I've only had the opportunity to
read just the first two chapters, I wanted to write and express my enthusiasm
for both the book and especially its first chapter.

David and Ken Burnham once bought me lunch, and because my loyalties are easily
purchased, I may be somewhat biased in my approach towards the book, but David
writes something very important in the first chapter that I have been mildly
railing against for some time now, too: the uncritical overuse of null hypotheses
in ecology. Indeed, I believe this to be such an important topic that I wish he
had extended the section for several more pages.

What he does write is this, in part:

"It is important to realize that null hypothesis testing was *not* what
Chamberlin wanted or advocated. We so often conclude, essentially, 'We rejected
the null hypothesis that was uninteresting or implausible in the first place, P
< 0.05.' Chamberlin wanted an *array* of *plausible* hypotheses derived and
subjected to careful evaluation. We often fail to fault the trivial null
hypotheses so often published in scientific journals. In most cases, the null
hypothesis is hardly plausible and this makes the study vacuous from the
outset...

"C.R. Rao (2004), the famous Indian statistician, recently said it well, '...in
current practice of testing a null hypothesis, we are asking the wrong question
and getting a confusing answer'" (2008, pp. 11-12).

This is so completely different from the extraordinarily successful approach
that has been adopted by physics.

In ecology, an experiment is most normally designed so its results may be
statistically tested against a null hypothesis. In this procedure, data analysis
is primarily an a posteriori process, but this is an intrinsically weak test
philosophically. In the end, you rarely understand more about the processes in
force than you did before you began. But the analyses characteristic of physics
don’t work that way.

In 1964, Richard Feynman, in a lecture to students at Cornell that's available
on YouTube, explained the standard procedure that has been adopted by
experimental physics in this manner:

"How would we look for a new law? In general we look for a new law by the
following process. First, we guess it. (laughter) Then we... Don't laugh. That's
the damned truth. Then we compute the consequences of the guess... to see if
this is right, to see if this law we guessed is right, to see what it would
imply. And then we compare those computation results to nature. Or we say to
compare it to experiment, or to experience. Compare it directly with
observations to see if it works.

"If it disagrees with experiment, it's wrong. In that simple statement is the
key to science. It doesn't make a difference how beautiful your guess is. It
doesn't make a difference how smart you are, who made the guess or what his name
is... (laughter) If it disagrees with experiment, it's wrong. That's all there
is to it."

-- http://www.youtube.com/watch?v=ozF5Cwbt6RY

In physics, the model comes first, not afterwards, and that small difference
underlies the whole of the success that physics has had in explaining the
mechanics of the world that surrounds us.
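
Feynman's recipe is concrete enough to sketch in a few lines. Below is a toy
version with invented numbers (the observed period and measurement error are
made up for illustration; the guessed law is the familiar pendulum formula):
guess a law, compute its consequences, and compare them with observation,
rejecting the guess if it disagrees beyond the measurement error.

```python
import math

# "Experiment": observed period of a 1.0 m pendulum (invented data)
observed_period = 2.006   # seconds
measurement_error = 0.01  # seconds

g = 9.81  # m/s^2

def predicted_period(length_m):
    """Consequence of the guessed law T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def guess_survives(pred, obs, err, n_sigma=3):
    """If it disagrees with experiment (beyond error), it's wrong."""
    return abs(pred - obs) <= n_sigma * err

pred = predicted_period(1.0)
print(f"predicted {pred:.3f} s, observed {observed_period} s")
print("guess survives" if guess_survives(pred, observed_period,
                                         measurement_error)
      else "guess is wrong")
```

The point of the sketch is the ordering: the model produces a prediction
before the comparison, so the data can actually falsify it.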

The entire array of plausible hypotheses advocated by Chamberlin don't all have
to be present during the first experimental attempt at verification of the first
hypothesis; they can occur sequentially over a period of years.
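
In modern statistical dress, Chamberlin's array of plausible hypotheses is
multimodel comparison of the kind Anderson advocates. Here is a minimal sketch
(data, models, and seed are invented for illustration): fit two candidate
models to the same data and compare them with AIC and Akaike weights instead
of rejecting a lone null.

```python
import math
import random

random.seed(1)
# Invented data: a linear signal plus Gaussian noise
xs = [i * 0.2 for i in range(50)]
ys = [2.0 + 0.5 * x + random.gauss(0, 1.0) for x in xs]
n = len(xs)

def aic(rss, n, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (k counts sigma too)."""
    return n * math.log(rss / n) + 2 * k

# Candidate 1: intercept only (the 'uninteresting null' as a model)
mean_y = sum(ys) / n
rss0 = sum((y - mean_y) ** 2 for y in ys)

# Candidate 2: straight line, ordinary least squares
mean_x = sum(xs) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
b = sxy / sxx
a = mean_y - b * mean_x
rss1 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

aics = {"intercept only": aic(rss0, n, 2), "linear": aic(rss1, n, 3)}

# Akaike weights: relative support for each model in the candidate set
best = min(aics.values())
raw = {m: math.exp(-(v - best) / 2) for m, v in aics.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}
for m, v in aics.items():
    print(f"{m}: AIC={v:.1f}, weight={weights[m]:.3f}")
```

Every candidate in the set is a substantive model; the output is relative
support across the array, not a reject/fail-to-reject verdict on one null.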

As David continues, "We must encourage and reward hard thinking. There must be a
premium on thinking, innovation, synthesis and creativity" (p. 12), and this
hard thinking must be done in advance of the experiment. Science is a predictive
enterprise, not some form of mindless after-the-fact exercise in number
crunching.

Although expressed in a different format, David Anderson is saying the same
thing as Richard Feynman, and I very much congratulate him for it.

Wirt Atmar