Steve's web page cites the URL of an article that takes the opposite
position from mine. I would not mind seeing some comments from other
people. I will add some more of my own, mostly at the end.
On 22 Jan 2001 08:49:16 -0800, [EMAIL PROTECTED] (Simon, Steve, PhD)
wrote:
> Apparently, this message did not get through to sci.stat.edu. So I am
> sending it again. My apologies to anyone who may have seen it twice.
>
> Rich Ulrich writes:
>
> >The Relative Risk for two groups is also familiar, Risk1/Risk2, but
> >it becomes intractable to useful statistics (and misleading, to boot)
> >when the Risks are not small.
Steve: >
> That's an interesting comment. Most people would argue the opposite: that
- I sure disagree with that "Most people." I think it should be,
"A few people who have never gotten used to the OR would argue..."
I was surprised, actually, to read the Deeks article and see that a
real, live, active professional of today would write warmly about the
RR. Maybe I am not reading the epi. literature?
Steve: >
> the odds ratio is misleading when the risks are not small. I believe there
> are limitations to both the odds ratio and the relative risk. I have
> documented my thoughts on the following web page.
>
> http://www.cmh.edu/stats/ask/oddsratio.htm
>
> Note in particular my comments relating the relative risk to the Car Talk
> puzzler about the hundred pound sack of potatoes.
>
> I'm curious what you think (both Rich Ulrich and other edstat-l readers)
> about the interpretability of the odds ratio and the relative risk. I'd also
> be interested in references about the intractability of the relative risk in
> complex modeling situations.
I believe that the RR is going to die out, because of its relative
intractability in modeling. Statistics courses use the OR. The
computer programs use the OR. I recently checked a nice epi-stat book
from 1982 -- it gives some formulas for confidence limits, etc., for
the RR, but the further explanations and problems make use of the OR.
The main virtue of the RR is its easy familiarity; its *only*
applicability is to a fixed population, with proportionate sampling.
That does make it useful as a measure of today's relevance, but it
does not make it useful for comparing studies, or for scientific
generalization.
For a pair of risks like 25% versus 75%, the RR differs from the OR.
It is fair to say that the RR is more obvious -- but the OR, which
tends to overstate (if you don't recognize the convention), does not
*miss* the effect. For a pair of risks like 10% versus 25%, there is
not much difference between the two. But reverse the labels: that
latter pair becomes 90% versus 75% in terms of the risks -- and it
looks like "almost no increase" -- so the RR is thoroughly
misleading, and misses the effect; whereas the OR remains unchanged.
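The arithmetic behind that label-reversal example is easy to check; here
is a small sketch in Python (my own illustration, not something from the
thread) that computes both measures for the 10%-versus-25% pair under
each labeling of the outcome:

```python
def relative_risk(p1, p2):
    """Relative risk: the simple ratio of the two risks."""
    return p1 / p2

def odds_ratio(p1, p2):
    """Odds ratio: the ratio of the odds p/(1-p) in each group."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Risks of 25% versus 10% for the event:
print(relative_risk(0.25, 0.10))  # 2.5
print(odds_ratio(0.25, 0.10))     # 3.0

# Reverse the labels (count the NON-event): risks become 75% vs 90%,
# i.e., compare 90% versus 75%:
print(relative_risk(0.90, 0.75))  # 1.2 -- "almost no increase"
print(odds_ratio(0.90, 0.75))     # 3.0 -- unchanged
```

The RR drops from 2.5 to 1.2 when you merely relabel which outcome is
the "event," while the OR stays at 3.0 either way -- which is the
invariance property the paragraph above is pointing to.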
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
http://jse.stat.ncsu.edu/
=================================================================