In a message dated Sun, 14 Jan 2001  9:09:06 PM Eastern Standard Time, "David Lesley" 
<[EMAIL PROTECTED]> writes:

<< We should keep in mind that the TFN event rankings and AOY top 10 are done in very 
different ways. The event rankings are negotiated by a small group (maybe 6 these 
days) of real experts, several of whom live in Europe. I expect that their debates 
were similar to some that have surfaced on this list. The ranking articles always give 
reasons for the order (except for one year). There is surely room for disagreement, 
but I usually agree with them.

<<The AOY top 10 is done by a vote, probably without campaigns, of a much larger 
list of people. I believe that the bulk of them are correspondents and writers for 
the magazine. That sort of ranking cannot be justified position by position because 
almost all the voters will have several serious disagreements of their own with the 
final ordering.>>

I'm not going to be at my computer today, and have gone no farther than this first 
post in the queue, although I see there is significant continuation of the threads of 
yesterday. Since it may be germane (do we need a dictionary for that one? :-) to the 
conversation, let me just take the time before I sign off to confirm the wisdom of 
David's post, which raises the danger of mixing the Rankings and the AOY voting.

The Rankings are done by a group of about a half-dozen people from several different 
countries (U.S. in a minority--so much for cries that we're U.S. biased). They have at 
their disposal charts detailing not only the seasonal records of the top 25 or so 
athletes in each event, but also to whom they lost in each competition, as well as 
calculations of the top-5 average marks for each athlete (which is how we assign value 
to sustained performance, rather than just a single mark).

These Rankers then trade arguments (much as goes on here, but with more concrete data) 
over a period of a couple of months, slowly forming the ranking order. There are no 
rash one-man decisions, nothing decided on the spur of the moment. Everything is 
considered, reconsidered, then reconsidered again. Is everybody completely happy at 
the end? Of course not! But the best consensus possible is reached (except for Ato 
this year).

The AOY choices, on the other hand, are done by a group of approximately 50 people 
very close to the sport from 20-odd nations. All have long ties to the sport, and have 
shown credentials for analyzing the sport that go beyond just their longevity.

They're provided with the seasonal records of each nominated athlete (usually about 
15, although write-ins are allowed), along with data on where they might be on the 
all-time list, what percent of the year's top performances they had, their probable 
World Ranking position(s), etc., etc.

They then have the tough job of trying to sort out people competing in different 
disciplines. One of the longtime Rankers refuses to participate in this activity, 
saying that it's simply impossible to compare people across events.

People whose ballots consistently fly in the face of reason are removed from the panel.

The voting is also a limited democracy (you know, like Florida :-) in that just 
because you vote doesn't mean your ballot gets counted. This has caused people to 
defect in the past.

For those of you who think that the ballots that omitted Greene or Korzeniowski were 
somewhere out in space, I agree completely. And if they had made a difference in the 
voting we wouldn't have counted them. But we tallied the votes both ways, and the 
order didn't change.

Hope this gives you a bit more insight into the process. And that it will stop you 
from mixing apples and oranges when you air your annual laundry list of complaints.

gh
