I have problems with the statistical methods used to develop most of these tables. Even the comparison method I used, which is similar to Hoffman's, is a bit too simplistic. The correct method, however, is rather data intensive. I would want to use the top 100 marks over a series of years to estimate the underlying variance in performances. That variance would be the means of identifying which performance is the greatest "outlier" relative to other performances. The one underlying assumption is that the same proportion of the population competes in each event, so that the probability distributions are comparable among events.
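To make the idea concrete, here is a minimal sketch of that approach. The scoring rule (a standard score against the spread of an event's top marks) is my assumed interpretation of the method described above, and the times below are made-up numbers standing in for the "top 100 marks over a series of years":

```python
from statistics import mean, stdev

def outlier_score(mark, top_marks):
    """Standard score of `mark` against the distribution of an event's
    top marks. For running events (lower time is better), a more
    negative score means a more extreme performance."""
    return (mark - mean(top_marks)) / stdev(top_marks)

# Synthetic top-10 lists (times in seconds; illustration only).
top_800m  = [101.1, 101.4, 101.6, 101.8, 102.0,
             102.1, 102.3, 102.4, 102.6, 102.8]
top_1500m = [206.0, 206.6, 207.1, 207.5, 207.9,
             208.2, 208.6, 209.0, 209.3, 209.7]

# Scores are comparable across events only under the assumption that
# the same proportion of the population contests each event.
print(outlier_score(100.9, top_800m))
print(outlier_score(205.0, top_1500m))
```

Whichever performance yields the more extreme score is the greater outlier relative to its own event's distribution, which is exactly the comparison the tables try to approximate.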

If we're going to rely solely on subjective comparisons, then Tobin's evaluation is no more valid than mine, and he has absolutely no basis for leaping to the conclusion that running near a WR in one event implies drug use. He's going to have to use a completely different basis for coming to that conclusion.

On the other hand, I'm not arguing that my comparison is subjective; rather, it can be recreated by anyone else in a step-by-step fashion that is readily transparent. If they want to change the underlying assumptions, they are free to do so and to come to their own conclusions. Such transparency is the fundamental basis of "objective" comparisons. Subjective comparisons are opaque and cannot be recreated.

RMc

At 01:23 PM 10/14/2003 -0500, [EMAIL PROTECTED] wrote:
ALL of these "comparison tables" are fundamentally flawed, as subjectivity is the common denominator. Don't believe me? Just compare the projected equivalents from the various tables: Purdy, Coe and Martin, Portuguese, Mercier (I'm missing a few).
