On Mon, 22 Apr 2013 11:03:12 -0700, Marc Carter wrote:
Hi, All --

A poll:
Am I being too picky about the use of the phrase, "highly significant" (or
something similar) when it's used to describe a very low-probability result? It sort of drives me crazy; all I can hear is my graduate math stats teacher threatening to kill us if we ever said something like that. I still read it in
papers and it's like fingernails on a chalkboard.

But perhaps I should just chill out?

What do you think?

I tend to agree with you, but I'd like to make the following points:

(1) Some software packages, such as SPSS, truncate the p-value or
"significance" at three decimal places, so it would be a mistake to
say that any result displayed as "p = .001" or "p < .001" is highly
significant: among such results, some are likely to have a far smaller
probability of occurring under a true null hypothesis than others.
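
A small illustration (my own sketch in Python, not SPSS output):
p-values that differ by orders of magnitude all collapse into the
same three-decimal display:

    # Mimic a three-decimal "significance" column: everything below
    # .001 is reported as "p < .001", no matter how small it really is.
    def three_decimal_display(p):
        return "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".")

    for p in (0.0009, 0.00004, 0.0000002):   # orders of magnitude apart
        print(f"exact p = {p:.7f}  ->  {three_decimal_display(p)}")
    # All three lines end in "p < .001"; the display alone cannot
    # rank the results.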

(2) To reinforce point (1) above: a couple of decades ago I conducted
a "levels of processing" memory experiment in a statistics class in order
to provide the students with some real data to work with.  In this
experiment, students were presented with 32 words via a slide projector.
Half of the students received instructions to determine whether each
word contained the letter "e" (they wrote "yes" or "no" for each
word on a response sheet), while the other half received instructions
to determine whether each word referred to a man-made/manufactured
object (again, writing "yes" or "no"; stimulus conditions were
balanced to make the yes/no response rates equal for both groups).
After a few minutes of distraction, students were told to recall as many
words as they could.  Since a one-way, two-level, between-subjects
design was used, an independent-groups t-test (with equal variances
assumed) and a one-way ANOVA were conducted, which provided the
following results from SPSS:

(a) t-test: t(29) = -5.97, p < .001, r^2 = .55

(b) One-way ANOVA: F(1, 29) = 35.59, p < .001, partial eta^2 = .55

Note that the above follows APA style recommendations for reporting
statistical results, which raises the question of why one would focus
on the p-value instead of the effect size measure, that is, the squared
point-biserial correlation or eta-squared.  Over half of the variance
in the dependent variable (i.e., number of words recalled) is accounted
for by the instruction manipulation.  The key idea is to make sure that
an effect size measure is presented and correctly interpreted; a quick
check of the arithmetic follows below.
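
With two independent groups, the t-test and the one-way ANOVA are the
same test (F = t^2), and the squared point-biserial correlation equals
eta-squared: r^2 = eta^2 = t^2/(t^2 + df) = F/(F + df_error).  A minimal
sketch in plain Python (my own check, working only from the statistics
reported above, not from the raw data):

    # Recover the effect sizes from the reported test statistics alone.
    # With two groups: F = t^2, and
    # r^2 (squared point-biserial) = eta^2 = t^2/(t^2 + df) = F/(F + df_error).

    t, df = -5.97, 29         # reported t-test, t(29) = -5.97
    F, df_error = 35.59, 29   # reported one-way ANOVA, F(1, 29) = 35.59

    print(f"t^2   = {t**2:.2f}")                # ~35.64, matches F within rounding
    print(f"r^2   = {t**2 / (t**2 + df):.2f}")  # .55
    print(f"eta^2 = {F / (F + df_error):.2f}")  # .55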

(3)  Excel, for all its shortcomings, attempts to provide p-values for
the obtained values of test statistics.  Though Excel blew up in providing
the two-tailed p-value for the data used above, it did provide the
p-value in the regression analysis, where the p-value for the coefficient
for LoP group membership was p = 1.74819473651439E-06, that is,
in scientific notation, or p = .00000174819473651439, if I remember
how to convert scientific notation back to ordinary numbers.  So,
does this result qualify as "super-duper highly statistically significant"?
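
For what it's worth, the coefficient for a two-group dummy predictor in
a simple regression is tested with the same independent-groups t-test,
so the exact p-value can be recovered from t(29) = -5.97.  A sketch
using Python's scipy (the digits will differ a little from Excel's
because the reported t is rounded):

    from scipy import stats

    t, df = -5.97, 29               # reported (rounded) t statistic
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p from the t distribution

    print(f"{p:.6E}")    # scientific notation, on the order of 1.7E-06
    print(f"{p:.20f}")   # the same value written out in fixed notation
    # Same number either way; only the display differs, which is also
    # all that separates SPSS's "p < .001" from Excel's long string.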

(4)  I agree with Karl W. that calling a test result "reliable" on the basis
of a p-value is very strange, and I had not been taught that usage.  However,
I did come across its use among some mathematical psychologists, which
made me wonder (a) why they would say such a thing, and (b) what I was
missing.

-Mike Palij
New York University
m...@nyu.edu
