Dennis Roberts writes:

> most books talk about inferential statistics ... particularly those 
> where you take a sample ... find some statistic ... estimate some error 
> term ... then build a CI or test some null hypothesis ...
> 
> error in these cases is always assumed to be based on taking AT LEAST a 
> simple random sample ... or SRS as some books like to say ...
> 
> but, we KNOW that most samples are drawn in a way that is WORSE than SRS 

> thus, essentially every CI ... is too narrow ... or, every test 
> statistic ... t or F or whatever ... has a p value that is too LOW  
> 
> what adjustment do we make for this basic problem?

Another thought-provoking question from Penn State.

In the real world, most people assess the deviation from SRS qualitatively
rather than quantitatively. If the deviation is serious, you treat the result
as a more preliminary finding, one in greater need of replication. If it is
very serious, you disregard the study's findings entirely. The folks in
Evidence Based Medicine talk about levels of evidence, and this is one of the
criteria they use to decide whether a study represents a higher or lower
level of evidence.

You probably do the same thing when you assess problems with non-response
bias, recall bias, and subjects who drop out in the middle of the study.
Typically you assess these qualitatively because it is so difficult to
quantify how much they will bias your findings.

You could argue that this represents the classic distinction between
sampling error and non-sampling error. The classic CI is almost always too
narrow, because it only accounts for some of the uncertainty in the model.
We are getting more sophisticated, but we still can't quantify many of the
additional sources of uncertainty.
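That said, for the sampling-design part of the problem there is a standard
partial adjustment: survey statisticians inflate the SRS standard error by
the square root of the design effect (deff), the ratio of the actual
sampling variance to the variance an SRS of the same size would have. A
minimal sketch (the function name and the numbers are made up for
illustration, and in practice deff itself must be estimated or borrowed
from similar surveys):

```python
import math

def deff_adjusted_ci(mean, se_srs, deff, z=1.96):
    """Widen an SRS-based confidence interval by the Kish design effect.

    deff > 1 means the actual design (e.g., cluster sampling) is less
    efficient than SRS, so the naive SRS standard error is too small
    and must be inflated by sqrt(deff).
    """
    se_adj = se_srs * math.sqrt(deff)
    return (mean - z * se_adj, mean + z * se_adj)

# Hypothetical example: a cluster sample with deff = 2 widens the
# naive 95% CI by a factor of sqrt(2), about 1.41.
lo, hi = deff_adjusted_ci(mean=50.0, se_srs=1.0, deff=2.0)
```

Of course, this only repairs the quantifiable sampling-design component;
it does nothing for the non-sampling errors discussed above.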

By the way, if you take a non-SRS sample and then randomly allocate these
patients to treatment and control groups, the CI appropriately accounts for
uncertainty within this population, but you will have trouble extrapolating
to the population you are really interested in. It's the classic internal
versus external validity argument.

I hope this makes sense and is helpful.

Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
STATS: STeve's Attempt to Teach Statistics. http://www.cmh.edu/stats
Watch for a change in servers. On or around June 2001, this page will
move to http://www.childrens-mercy.org/stats



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================