fsimm...@pcc.edu wrote:
This is to illustrate a point that Warren has recorded on his website
somewhere (I don't remember exactly where); namely that lack of
summability is not insurmountable.

We start with the assumption that the voters have range-style ballots
on a scale of zero to six.  [Seven levels are about optimal according
to the psychometrics experts.]

I thought it was five, not seven. Do you have any papers?

At each precinct the ballots are sorted into n piles, one for each
candidate.  The ballots in each pile are averaged together to get a
rating vector for each candidate.  [At this first stage, if a
candidate shares (with k-1 other candidates) top rating on a ballot,
then a copy of that ballot is sent to each of those candidates'
piles, along with a weight of 1/k.]

The precincts send the n candidate vectors, together with their
respective total weights, to the counting center.  For each candidate
a weighted average of the vectors for that candidate from all of the
precincts is computed, and the total weight is taken as the size of
that candidate's faction.
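Continuing the same sketch under the same assumptions (the input here is a list of per-precinct outputs from precinct_summary above), the counting-center step might look like this:

def amalgamate(precinct_summaries, n_candidates):
    """Merge precinct summaries into one (vector, faction size) per candidate."""
    factions = []
    for c in range(n_candidates):
        total_weight = sum(summary[c][1] for summary in precinct_summaries)
        if total_weight == 0:
            factions.append(([0.0] * n_candidates, 0.0))
            continue
        merged = [0.0] * n_candidates
        for summary in precinct_summaries:
            vec, w = summary[c]
            for j in range(n_candidates):
                merged[j] += w * vec[j]
        factions.append(([v / total_weight for v in merged], total_weight))
    return factions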

The STV computation is then based on these n amalgamated factions.

That would fail the Droop proportionality criterion. Just take your favorite example where Range fails it, then stick a universal favorite candidate X in front of every voter's vote. Now every ballot lands in X's pile, so there's only one rating vector - X's - and the averaging will smooth out any structure beyond X.

This is an extreme example, but the averaging could hide detail in more realistic ballot sets, too.
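To make that concrete with the hypothetical precinct_summary sketch from above (the numbers are invented purely for illustration):

ballots = (
    [[6, 5, 0, 0]] * 50 +   # bloc A: X first, then candidate 1
    [[6, 0, 0, 5]] * 50     # bloc B: X first, then candidate 3
)
for c, (vec, w) in enumerate(precinct_summary(ballots, 4)):
    print(c, [round(v, 2) for v in vec], w)
# Only candidate 0 (X) ends up with a non-empty pile (total weight 100),
# and its averaged vector rates candidates 1 and 3 at 2.5 each, so the
# two 50-voter blocs can no longer be told apart in the summary.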

----
Election-Methods mailing list - see http://electorama.com/em for list info
