Chris wrote
<<<
I've been trading e-mails with several of you and want you to know I
really appreciate your time and intelligence.  So far, the consensus
seems to be to do nothing as anything I can do is no better than what
I have.  But...
>>>

You're welcome; this is the sort of thing these lists are good for
(IMHO, of course).

<<<
How can I (or you, for that matter), as a reasonable person, ignore
what is glaring evidence of bias in my (or your) data?
>>>

The question isn't how you can ignore it, but what you can do about
it.  I don't see anything, frankly, and it appears that the consensus
agrees.  Any correction you make will only add to the confusion.  When
you write up your results, state the limitations and the comparison
with other data.  Then readers will have all the information.


<<<
Let me be a bit more specific about the problem.  We, and others like
us who study volunteering with limited budgets, tend to use contract
data collection firms (we used Westat) who have a less than sterling
record of data collection.  The real response rates hover in the low
30s, and errors abound.
>>>

This doesn't jibe with what I know about Westat, but it is something
for you to discuss with them.  Westat CERTAINLY has experts in making
the sorts of adjustments you need, when they are possible.  From what I
hear of Westat, and from the people whom I know who work there, I would
expect that their level of expertise is extremely high.  Of course, the
proper time to discuss this with them (or with us) would have been
before you started the project.  Such a discussion might have led to
better methods - or, it may have led to the conclusion that your budget
was inadequate.  I think that it probably would have been better to use
a smaller sample and a better method (e.g., more callbacks, mailings
before a call, some incentive to respond, etc.).  These issues have
been studied a LOT, and Westat are experts at the answers.

<<<
  On the other side of things is the Bureau of
Labor Statistics, which uses the Census Bureau to collect volunteering
data in a supplement to the Current Population Survey.  They have a
real response rate of at least 70%. 
>>>

Then why are you studying it as well?  Do you doubt the Census
numbers?  Or do you have different questions?  Or something else?

<<<
We use a random sample (one adult per household)
>>>

From what you've said in previous messages, you do not have a random
sample of households.  I also have issues with sampling one adult per
household.  How are you adjusting for this?  
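(For what it's worth, the usual textbook adjustment for a one-adult-per-household design is to weight each respondent by the number of adults in the household, since an adult in a household of k adults had only a 1/k chance of selection.  Here is a minimal sketch with made-up numbers; it is not a recommendation for your particular design.)

```python
# Sketch: weighting respondents by household size to correct for a
# one-adult-per-household selection.  Data below are invented.
# Each tuple: (volunteers? 1/0, number of adults in the household)
respondents = [
    (1, 1), (0, 1), (1, 2), (0, 2), (0, 2),
    (1, 3), (0, 3), (0, 4),
]

# The unweighted estimate treats every respondent equally, which
# over-represents adults from small households.
unweighted = sum(v for v, _ in respondents) / len(respondents)

# Weighted estimate: a respondent selected with probability 1/k gets
# weight k (a Horvitz-Thompson-style correction).
num = sum(v * k for v, k in respondents)
den = sum(k for _, k in respondents)
weighted = num / den

print(f"unweighted: {unweighted:.3f}")  # 0.375
print(f"weighted:   {weighted:.3f}")    # 0.333
```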

<<<
Our sample size was 4,000 adults; theirs was around 120,000 aged 16+.

>>>

4,000 is plenty of people if the method is good.  4,000,000 is not
enough if the method is bad.
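(The point above can be made concrete with a little arithmetic.  The numbers here are invented purely for illustration: if responders differ systematically from the population, the bias stays put no matter how large the sample, while only the sampling error shrinks.)

```python
import math

# Toy illustration (invented numbers): nonresponse bias does not
# shrink as the sample grows, but sampling error does.
true_rate = 0.25       # volunteering rate in the whole population
responder_rate = 0.40  # rate among the kind of people who respond

for n in (4_000, 4_000_000):
    # With heavy nonresponse, the estimate centers on the responders'
    # rate, not the population rate, so the bias is constant in n...
    bias = responder_rate - true_rate
    # ...while the standard error shrinks like 1/sqrt(n).
    se = math.sqrt(responder_rate * (1 - responder_rate) / n)
    print(f"n={n:>9,}: bias={bias:.3f}, approx. SE={se:.5f}")
```

At n = 4,000 the standard error is already far smaller than the bias, and at n = 4,000,000 it is essentially zero while the bias is unchanged.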
<<<
Therefore, rather than guessing at how to adjust for this bias, and
rather than ignoring it, I'm looking for ideas on how to correct it.
>>>

I'm sorry, but I really don't see much you can do.

The whole study brings to mind two quotes (both, I believe, from George
Box, but I have also seen other attributions)

1.  

Hiring a statistician after your data has been collected is like hiring
a physician when the patient is in the morgue.  He may be able to tell
you what went wrong, but he is unlikely to be able to fix it.

2.

An approximate solution to the right problem is much better than an
exact solution to the wrong problem.


Hope this helps, though I am afraid it won't.....

Peter

Peter L. Flom, PhD
Assistant Director, Statistics and Data Analysis Core
Center for Drug Use and HIV Research
National Development and Research Institutes
71 W. 23rd St
New York, NY 10010
www.peterflom.com
(212) 845-4485 (voice)
(917) 438-0894 (fax)


=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
