Re: [UAI] Has anyone else noticed how odd many frequentist techniques are?

2014-09-28 Thread Konrad Scheffler
Hi Rich,

If you are looking for a forum where these issues are frequently discussed,
I recommend Andrew Gelman's blog: http://andrewgelman.com

If you are looking for formal sources, there are the references cited in
Kevin's attachment (in addition to his book, of course). In particular, if
you are aiming to write something on the topic I recommend perusing the
book by Jaynes (and his papers more generally).

Regards,
Konrad


On Sat, Sep 27, 2014 at 12:44 PM, Richard E Neapolitan 
richard.neapoli...@northwestern.edu wrote:

  Thanks, Kevin,
 Well, I guess they are not too well known. I asked my mentor in Bayesian
 stats, Sandy Zabell (a prominent Bayesian statistician), about it. Although
 he agreed with me, he did not really have references stating how
 pathological these frequentist techniques are.

 I will tell Sandy about your book. He still teaches stats at NU.
 Best,
 Rich



 On 9/27/2014 1:08 PM, Kevin Murphy wrote:

 Yes, these problems are very well known. I am attaching a brief summary
 (from my textbook http://www.cs.ubc.ca/%7Emurphyk/MLbook/index.html) of
 some of the most famous pathologies of frequentist statistics (cited
 references can be found in the bibliography here
 http://www.cs.ubc.ca/%7Emurphyk/MLbook/pml-bib.pdf). There are several
 more pathologies, but I didn't want to go overboard :)

  Kevin

  PS. A very nice practical book for teaching undergrad stats from a
 Bayesian POV is this:

  @book{Kruschke10,
    title     = {{Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS}},
    author    = {J. Kruschke},
    year      = {2010},
    publisher = {Academic Press}
  }




 On Fri, Sep 26, 2014 at 1:59 PM, Richard E Neapolitan 
 richard.neapoli...@northwestern.edu wrote:

  Dear Colleagues,

 Since I converted to Bayesian statistics in the late 1980s, I have not
 looked at most frequentist methods. However, every time I look at them
 again, I notice how apparently preposterous many of them are.

 First there was the Bonferroni correction, which makes me update my belief
 about the results of an experiment based on how many other experiments I
 happen to conduct alongside it (and which of course implicitly assigns a
 low prior probability). One researcher even told me that he has students
 conduct fewer experiments so that a finding has a better chance of being
 significant. I just walked away scratching my head.
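
 To see how odd this is, here is a minimal Python sketch (my own
 illustration with a hypothetical p-value; none of these numbers come from
 a real study). The verdict on one fixed result flips purely because more
 tests happen to be run alongside it:

     # Bonferroni: testing m hypotheses at overall level alpha means each
     # individual test is run at the stricter level alpha/m.
     alpha = 0.05
     p_value = 0.01                     # hypothetical p-value, one experiment
     for m in (1, 2, 5, 10):            # number of experiments conducted
         threshold = alpha / m          # Bonferroni-adjusted per-test level
         verdict = "significant" if p_value < threshold else "not significant"
         print(f"m = {m:2d}: threshold = {threshold:.4f} -> {verdict}")

 With these numbers the finding is significant as one of two experiments
 but not as one of ten: the same data, a different conclusion.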

 Now, in the process of designing a small test for a student, I noticed
 that two-tailed hypothesis testing is completely unreasonable. Together
 with the one-tailed test, it gives me decision rules that enable me to
 reject the hypothesis that the mean is less than or equal to 0, but not
 reject the hypothesis that it equals 0, even though equality is the more
 specific claim. The explanation is wrapped up in a story about the
 question asked and long-run behavior over other similar experiments that
 are never even run. So two people can walk away from the same experiment
 with different updated beliefs about whether the mean is 0, based not on
 their prior beliefs but on the question they happened to ask. In general,
 hypothesis testing does not seem to be the way to go. We should simply
 compute confidence intervals or posterior probability intervals.
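
 A minimal sketch of that situation (a hypothetical z statistic and a
 standard normal test at alpha = 0.05; scipy is assumed for the normal CDF):

     from scipy.stats import norm

     z = 1.8        # hypothetical z statistic; the same data feed both tests
     alpha = 0.05
     p_one = 1 - norm.cdf(z)        # one-tailed test of H0: mean <= 0
     p_two = 2 * (1 - norm.cdf(z))  # two-tailed test of H0: mean = 0
     print(f"one-tailed p = {p_one:.3f}: "
           f"{'reject' if p_one < alpha else 'fail to reject'} mean <= 0")
     print(f"two-tailed p = {p_two:.3f}: "
           f"{'reject' if p_two < alpha else 'fail to reject'} mean = 0")

 Here the one-tailed p is about 0.036 and the two-tailed p about 0.072, so
 we reject "mean <= 0" yet fail to reject "mean = 0" on the very same data.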

 The Bayesian's world is so much simpler. She updates her beliefs based
 solely on her prior beliefs and the data. There is no story that leads to
 strange results.
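
 For contrast, a minimal sketch of the Bayesian computation (an assumed
 normal-normal conjugate model with known noise variance; the prior and the
 data are made up for illustration):

     import numpy as np
     from scipy.stats import norm

     mu0, tau0 = 0.0, 10.0   # vague prior: mean ~ Normal(mu0, tau0^2)
     sigma = 1.0             # known observation noise
     data = np.array([0.8, 1.1, 0.4, 1.6, 0.9])  # hypothetical observations

     # Standard conjugate update: precisions add, means combine by precision.
     n = len(data)
     post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
     post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

     lo, hi = norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
     print(f"95% posterior interval for the mean: ({lo:.2f}, {hi:.2f})")
     print(f"P(mean > 0 | data) = "
           f"{1 - norm.cdf(0, post_mean, np.sqrt(post_var)):.3f}")

 Whatever question is asked of it, the posterior, and any interval or
 probability read off it, stays the same.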

 All this matters, especially in medical applications, because so many
 studies are deemed significant or not significant based on the enigmatic
 p-value and the Bonferroni correction. I like to say that in medicine,
 for every study there is an equal and opposite study.

 I am writing because I wonder who else has noticed these oddities. I
 never read about them; I simply observed them independently. I find it
 curious that they have persisted for so long and that more is not said
 about them.

 Best,
 Rich


 --
 Richard E. Neapolitan, Ph.D., Professor
 Division of Health and Biomedical Informatics
 Department of Preventive Medicine
 Northwestern University Feinberg School of Medicine
 750 N. Lake Shore Drive, 11th floor
 Chicago IL 60611





 --
 Richard E. Neapolitan, Ph.D.
 Division of Biomedical Informatics
 Department of Preventive Medicine
 Northwestern Feinberg School of Medicine
 750 N. Lake Shore Drive, 11th Floor
 Chicago, Illinois 60611








-- 

Konrad Scheffler
University of California, San Diego
http://id.ucsd.edu/faculty/KonradSchefflerPhD.shtml

___
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai