Abs,

There are definitely problems with the editorial, but I think "most 
mega-ultra-super-biased" is an overreaction. It appears that you have 
overlooked some of the points made there, and the fact that it does not pretend 
to be an exhaustive list of alternative methods. The editorial attempts to 
digest what is in 43 articles in that special issue. Some of those articles do 
promote Bayesian methods – not a surprise – and some advocate using P values 
but without ascribing magical properties to P < 0.05. My own emmeans package 
does present P values (sans stars, and no emojis either) in a lot of contexts.
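
For example, here is a rough sketch of the kind of output I mean (the particular
model, fitted to the built-in warpbreaks data, is just for illustration):

    library(emmeans)
    fit <- lm(breaks ~ wool * tension, data = warpbreaks)
    emm <- emmeans(fit, ~ tension | wool)
    pairs(emm)   # pairwise comparisons: estimates, SEs, t ratios, P values; no stars

The contrasts print with estimates, standard errors, degrees of freedom, t
ratios, and P values, and nothing in that output singles out P < 0.05 as special.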

More to the point, the criticisms you offer have to do with later sections of 
the editorial – not the initial part, which is largely a repeat of an earlier 
ASA statement on interpretation of P values with the added recommendation that 
people should never say "statistically significant." It is that initial part 
that I think does describe a consensus of a large and growing proportion of 
statisticians and other scientists that placing undue emphasis on "statistical 
significance" is a bad thing. Emphasizing P values by adding stars encourages 
that kind of misdirected emphasis.

It seems fairly harmless to change the default for "show.signif.stars" to 
FALSE. However, I do recognize that no change to R's defaults should be taken 
lightly or done without careful consideration. I only ask that such careful 
consideration take place, and hope in fact that a plan can be made to phase in 
such a change. 
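
For what it is worth, individual users can already opt out of the stars
themselves, in a session or in .Rprofile; an illustrative snippet:

    options(show.signif.stars = FALSE)
    fit <- lm(breaks ~ wool + tension, data = warpbreaks)
    summary(fit)   # the coefficient table prints without the significance stars

So the question concerns only the factory default, not the availability of the
stars for those who want them.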

Thanks,

Russ

Russell V. Lenth  -  Professor Emeritus
Department of Statistics and Actuarial Science   
The University of Iowa  -  Iowa City, IA 52242  USA   
Voice (319)335-0712 (Dept. office)  -  FAX (319)335-3017



From: Abs Spurdle <spurdl...@gmail.com> 
Sent: Thursday, March 28, 2019 12:19 AM
To: Lenth, Russell V <russell-le...@uiowa.edu>; r-devel <r-devel@r-project.org>
Subject: [External] re: [Rd] default for 'signif.stars'

I read through the editorial.
This is one of the most mega-ultra-super-biased articles I've ever read.

e.g.
The authors encourage Bayesian methods, and literally encourage subjective 
approaches.
However, there's only one reference to robust methods and one reference to 
nonparametric methods, both of which are labelled as purely exploratory 
methods, which I regard as extremely offensive.
And there don't appear to be any references to semiparametric methods, or 
machine learning.

Surprisingly, they encourage multiple testing; however, they don't mention the 
multiple comparison problem. Something I can't understand at all.

So, maybe we should replace signif.stars with emoji...?


