At 12:46 PM 12/30/2008, Kristofer Munsterhjelm wrote:
Abd ul-Rahman Lomax wrote:
At 05:48 AM 12/28/2008, Kristofer Munsterhjelm wrote:
Abd ul-Rahman Lomax wrote:

That makes the entire cycle, including polls and feedback, into one election system. "Method" is too narrow, because the system isn't just "input, then function, then output"; it doesn't just translate individual preferences into social preferences.

"Election systems" in the real world are extraordinarily complex. "Voting systems" are methods for taking a ballot and generating a result; sometimes this is a fixed and final result, sometimes it is feedback for subsequent process, which may include a complete repetition, repetition with some restrictions, or even a coin toss.

Individual preferences do not exist in a vacuum, there is inherent and massive feedback in real societies. The idea that there are these isolated voters who don't talk to each other and don't influence each other by their positions is ... ivory tower, useful for examining certain theoretical characteristics of systems, but not for predicting the function of systems in the real world. It can be useful, sometimes, but we must remember the limits on that utility as well.

That's all nice, but with the versatility of that wider definition, you get the chance of problems that accompany systems that include feedback within the system itself. Such problems can include cycling, too much stability (reaching a "compromise" that wasn't really a good compromise), or too much instability (not settling, as with cycling, or reaching a near-random result depending on the initial state of the system).

We are now considering as relevant "cycling" within the entire electorate, within the process by which a whole society comes to an election with the set of preferences and preference strengths that they have. Human societies have been dealing with this for a long, long time, and the best answers we have so far are incorporated in traditional deliberative process, which ensures that every point of view of significance is heard, that possible compromises are explored, and that there is an overall agreement that it is time to make a decision, before the decision is actually made. And then the decision is generally made by or with the explicit consent of a majority of those voting, with the implicit consent of those not voting (but able to vote).

In short, there's a wider range of possible outcomes because the system permits many more configurations than a simple one-shot election method. This is good when it leads to a better result from voters optimizing their votes in a way that reaches the true compromise, but it's bad when factions use that increased range to try to game the system. If Range voters (for instance) need to consult polls or the prevailing atmosphere to gain knowledge of how to express their votes, then that too is something the strategists can manipulate.

Range voters don't need to consult polls! They can do quite well, approaching the most strategic possible vote, without them, voting purely based on their opinions of the candidates and some common sense.

Those who strategize, who do something stronger than this, are taking risks. All the groups will include people who strategize....

When one uses strategy to construct a wider mechanism on top of a single election method by adding a feedback system (such as one may say is done by Range if it's used honestly), then that's good; but if one uses strategy to pull the method on which the system is based in a direction that benefits one's own preferences at the expense of others, giving oneself additional power, then that is bad. Even worse is if many factions do so and the system degrades further because it can't stabilize or because the noise swamps it; or if the combined strategizing leads to a result that's worse for all (chicken-race dynamics).

The "pulling" of a group toward its preferred result is, however, what we ask voters to do! Tell us what you want, and indicate by your votes how strongly you want it! Want A or you are going to revolt? You can say that, perhaps, though we are only going to give you one full vote to do it with. Want to pretend that you will revolt? -- or merely your situation is such that A is so much better than the others that you don't want to dilute the vote for A against anyone by giving them your vote? Fine. That's your choice. It helps the system make its decision.

Be aware that if the result is not going to be A, you have abstained from the result. If there is majority failure, you may still be able to choose between others. (IRV *enforces* this, you don't get to cast a further vote unless your candidate is eliminated.)

*Truncation will be normal*. And, in fact, it represents a reasonably sincere vote for most voters (in most common elections under common conditions). Why are these "strategic voters" different?

I realized the error quite some time back in connection with Approval voters. The votes are "strategic" because the voters supposedly "also approve" of the candidates, but for "strategic" reasons, don't also vote for this supposedly also-approved candidate. But "approval" isn't an absolute. There is no absolute "approval cutoff." What we approve depends on what we think we can get!

Saari and some other voting theorists strongly dislike the indeterminacy of this. How are we supposed to do our nice neat analyses of how voters will vote, based on our suppositions regarding their preferences, if they will shift their votes depending on how they perceive each other, as well as the candidates?

But isn't this the decision that voters are *really* making? What is the best outcome for *this* electorate? Paradoxically, it is alleged that voters will elect mediocre candidates. Why? Because, supposedly, they will vote for any candidate who is above, even if just barely, the average. It's preposterous! We don't and won't vote that way! I.e., if voters vote "sincerely," as these analysts imagine (together with imagining what the "sincere vote" is), of course they will get a mediocre outcome (often)! But that voters *don't* just rubber-stamp candidates, *won't* add additional preferences or approvals unless they are willing to support the election of these additional candidates, causes the method to work. *Strategic voting is necessary!*

Arrow and others saw the problem with utilities and cardinal ratings as being that there were no absolutes, no single way to translate a set of utilities into a voting pattern. It turns out that this wasn't a *problem*; it was necessary for the voters to have this freedom, to use probability information in addition to raw, isolated, what-if utilities. And this is how we individually make decisions in our lives. We do not just go for the best imaginable option; we moderate that and go for what we think we can get. Good thing, too!

But there is nothing wrong with aiming a little high. And when too many voters aim too high, we get majority failure, which means that the voters need to reconsider a bit. Ideally, the approval cutoffs slide down a little. That's what happens with deliberative process; we can approach this with election methods, and I think that it's possible to get quite close with two rounds, provided the rules are right.

What I've come up with so far is using an advanced method for the primary, and probably for the runoff as well, because I also want to see write-in votes being possible in the runoff, in addition to at least two candidates on the runoff ballot. We actually have this, without the advanced methods: majority required in the primary, plurality allowed in the runoff. A runoff is only held when it is arguable from the election results that *either* candidate on the ballot might be a decent choice, perhaps from different perspectives (Range winner vs. Condorcet winner).

Majority failure is a reasonable runoff trigger, but some runoffs are more reasonable than others. For example, a primary with three candidates, A, B, and C, with A getting 49% of the vote, B 26%, and C 25%, really doesn't need a runoff if this was a preferential ballot: those are percentages after transfers or additions, and voters could have dealt with the situation that B and C were vote-splitting. Thus it *may* be possible to set some conditions that will sometimes avoid unnecessary runoffs. But I would not like these conditions to be anything other than very reasonable and solid predictions, based on the expressed votes, that the winner would gain a majority in the runoff.

It all becomes unnecessary with Asset. The "runoff" is an election process in which only the public voters, those who collectively represent all the voters in the primary, vote.

Asset is the only system where voters can vote with *total* sincerity, not voting for any candidate whom they do not maximally trust, can bullet vote if they want, and lose no participation in the result. They will know where their personal vote went. (If it was single-winner; and also if it was multiwinner and electors take care to reassign votes by precinct, substantially.)

I think that what we have to distinguish here is Range as part of the wider system that involves adaptation, and Range as an isolated method.

Sure. But then we think of Range as a kind of blind poll, where the electorate has no sense of itself.

If you consider Range as an isolated method like other methods, which gathers information from voters, churns it through some function, and outputs an aggregate ballot ("society's ballot"), be it ordinal, cardinal or some other format, then Range is susceptible to strategy - the kind of strategy that leads to bad outcomes.

"Susceptible to strategy" must be understood in the context; it means something different with Range than it does with ranked methods. And, I'd submit, it does *not* lead to bad outcomes.

Strategic voting in Range, in theory, damages the outcome. But compared to what? Compared to so-called "sincere" votes. This seems necessary because the very method by which we judge the outcome quality is the sum of "sincere utilities." However, there is a problem. Sincere utilities are not votes; votes are typically normalized.

I've seen a result from Warren Smith that, if I read it correctly, showed better simulation outcome with a *mix* of so-called sincere and so-called strategic votes.

What? Range isn't ideal? That's right. To get ideal results we would need to have some way for voters to not only know absolute, commensurable utilities, but to be required or incentivized, somehow, to vote them. There can be such methods, typically using auctions.

However, Range itself, in practice, is a kind of auction. The voter has a vote to spend. Let the voter spend this vote how the voter sees fit. The voter can place the entire vote in one basket, so to speak. That's a bullet vote for one candidate, 100%, against all other candidates, 0%. Or, strategically the same, the same vote but with intermediate votes where the voter sees them as harmless or even of non-electoral benefit.
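That auction picture amounts to summing fractional votes. A minimal sketch, with invented candidate names and weights (not drawn from the discussion):

```python
# Range tallying as "spending" fractional votes: each ballot maps a
# candidate to a weight in [0, 1]; the greatest total weight wins.

def range_winner(ballots):
    totals = {}
    for ballot in ballots:
        for cand, weight in ballot.items():
            totals[cand] = totals.get(cand, 0.0) + weight
    winner = max(totals, key=totals.get)
    return winner, totals

# A bullet vote is the extreme case: the whole vote in one basket.
ballots = [
    {"A": 1.0, "B": 0.0, "C": 0.0},  # bullet vote for A
    {"A": 1.0, "B": 0.6, "C": 0.0},  # intermediate rating for B
    {"A": 0.0, "B": 1.0, "C": 0.3},
]
winner, totals = range_winner(ballots)
```

Nothing forces the voter to use the intermediate values; leaving them out reduces this to an Approval-style count.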

Now, where is the "harm?" Well, obviously, if a voter knows that the voter's vote will affect the outcome, the voter can choose the basket to invest in that will cause the best outcome. But, in fact, the voter doesn't know the outcome, or not well enough to make exact predictions. Where the voter *does* have that kind of knowledge, strategy doesn't make much difference.

What is happening is that the voter won't add an additional "approval," if the voter fears that this will damage the outcome from the voter's perspective. And the result might be the election of the voter's favorite. But only if the rest of the electorate, basically a majority or, under Range, the weight of overall preference, agrees!

Strategic voting in Range is what would be sincere expression of strong preference in a ranked method. Prefer A over all others and don't give a fig about the others if A is going to lose? Vote for A and truncate. It seems that many or even most voters will do this if allowed. I really need to look at those San Francisco ballot images; they have stories to tell that don't show in the election results. What is the level of truncation? We know how many voters for minor candidates truncate -- or maybe run out of ranks -- before reaching a frontrunner, *lots* of them. But what we don't know is how many supporters of major candidates (top two) truncate. It's not generally counted.

However, if it's just one component of a wider system - the feedback method - then it becomes a sort of manual DSV that polls the intent of the voters (if they don't lie or drive it into oscillation etc), and that "greater method" may be a good one. I don't know.

Plurality works much better than we might think because of the greater system. It's still pretty bad! But *usually* it comes up with the right result! And when it fails to do that, *usually* the result isn't terrible. I don't know how much longer we can depend on "usually" being good enough!

From a convenience point of view, some voters may want not to have to care about other voters' positions. "I just want to give my preference", says a (hypothetical) Nader voter who, although a third party supporter, thinks Bush is so bad that among two-party mediocrity, Gore would be preferable to Bush. Of course, if your point that people naturally vote VNM utilities (or somewhere in between those and sincere utilities) is true, then it would be an inconvenience to ask sincere cardinal opinions of voters, rather than the other way around.

People vote vNM utilities, that's pretty obvious. A vNM utility is somewhere between a "sincere normalized utility" vote and an approval-style vote.

It's not inconvenient to be *allowed* to vote intermediate utilities; it is not a requirement. A voter might vote 100% for their favorite and, with sum-of-votes Range, they are done. And in most elections, this is all most voters need to do! And the rest know, usually, who they are.

Range is nothing more or less than allowing fractional votes. Voters don't have to use them!

In any case, ranked methods handle this issue, but note that the ranked methods are once-through methods, not part of a "manual DSV" system.

Ranked methods, though, suffer from *two* problems. We've been discussing the problem of loss of preference strength information, but there is another: the very serious difficulty that many voters face in trying to fully rank. This is why Carroll invented Asset: to allow voters to, if it is what they wanted to do, simply vote for their favorite without losing voting power.

But, we know, systems that only consider preference are flat-out whacked by Arrow's Theorem. And once preference strength is involved, and we don't have a method in place for extracting "sincere preferences with strengths" from voters, we must accept that voters will vote normalized von Neumann-Morgenstern utilities, not exactly normalized "sincere utilities," generally. Real voters will vote somewhere in between the VNM utilities -- incorrectly claimed to be Approval style voting -- and "fully sincere utilities." Such a system is claimed by Dhillon and Mertens to be a unique solution to a set of Arrovian axioms that are very close to the original, simply modified as necessary to *allow* preference strength to be expressed.

Systems that only consider preference are "whacked" by Arrow's Theorem to the degree that the best methods come short of it. If we have "IIA except in a few cases", then that may be good enough. It is true, though, that all ranked methods are susceptible to strategy (Gibbard-Satterthwaite).

Sure. But pure ranked methods *all* suffer from the two very serious problems I mentioned. A preference profile, a true profile, is not purely ranked. That's not how the human mind operates.

Ranked methods used as primary stage in a runoff system, though, don't suffer nearly as much from these problems.


But even a single stage runoff can introduce vast possibilities of improvements of the result. The sign that this might be needed is majority failure. ("Majority" must be defined in Range, there are a number of alternatives.) Range could, in theory, improve results even when a majority was found, but, again, we are making compromises for practicality. A majority explicitly accepting a result is considered sufficient.

If a Range vote is a vote with a lesser strength, then Range fails Majority. If it's a partial vote, it doesn't. I think.

Range itself is generally considered to fail Majority; however, there is a problem with definitions. If a voter has the "exclusive preference" that is the basis for "majority preference," but does not vote this preference, then no method can detect an unexpressed preference. However, on the other side, and with Range Voting I consider it telling, we can argue that if a majority has ranked a candidate higher than all others, it has expressed the necessary preference. Now, if this triggers a runoff (because a different candidate has a higher total), and the runoff method is clearly MC compliant, then the system becomes MC compliant, as long as we consider the necessary majority to refer to the last election in the overall process.

If you're going to use Bucklin, you've already gone preferential. Bucklin isn't all that impressive, though, neither by criteria nor by Yee. So why not find a better method, like most Condorcet methods? If you want it to reduce appropriately to Approval, you could have an "Approval criterion", like this:

Simplicity and prior use. I'm not convinced, as well, that realistic voter strategy was simulated. Bucklin is a phased Range method (specifically phased Approval, but you could have Range Bucklin): you lower the "approval cutoff," rating by rating, until a majority is found. (I'll mention once again that Oklahoma passed a Range method, which would have been used and was only ruled unconstitutional because of the rather politically stupid move of requiring additional preferences or the first preference wouldn't be counted.) No, Bucklin isn't theoretically optimal, but my suspicion is that actual performance would be better than theory (i.e., than what the simulations show). Bucklin is a *decent* method from the simulations, so far. (Most voters will truncate, probably two-thirds or so. If a simulation simply transfers preferences to the simulated ballots, Bucklin will be less accurately simulated. Truncation results in a kind of Range expression in the averages -- just as Approval does to some degree. The decision to truncate depends on preference strength.)

Since we don't have programs to check how often various methods fail different criteria, I'll grant the part about criteria. However, Yee diagrams show very simple voting situations: there are candidates in "issue space" and people prefer candidates closer to them to candidates farther away. The Gaussian distribution of voters on a point might be contested, but that's about it. Bucklin produces quite strange Yee diagrams (though not the fragmented mess of IRV), so I'd say that if we had the chance to switch to another method, Condorcet would be quite a bit better at only slightly greater complexity (unless you want to go all the way to Schulze).

I should look at the Yee diagrams. Bucklin incorporates a certain discontinuity because of the fact that a majority can occur at different integrations of the ranks. Bucklin, aside from that, though, is Approval, with a device that probably will encourage sincere voting. Yee diagrams don't show social utility, they show possible chaos in results. IRV not being monotonic is indeed a mess. But Bucklin is monotonic.

To really judge Bucklin's performance requires a better simulation of "additional approval." In real elections, the decision to add additional preferences is a complex one. You cannot simply assume, for example, that voters will use all the ranks. Most won't. Most, apparently, won't even use the second rank. And that makes perfect sense, and shouldn't harm outcomes! The only problem arises when the method terminates with a plurality.

We've made a mistake in considering Plurality a method; it is a *class* of methods, or a specific election rule. Almost all methods generally proposed, excepting those which incorporate runoffs, are plurality methods. Even full-ranking-required methods produce a majority only by coercing voters into voting for all but one candidate; the "majority" which is, then, necessarily produced is not one coming from free consent to the election.

A Condorcet method with voluntary ranking, though, could certainly be used as a primary for possible runoff. With voluntary ranking, we could either assume that voters, by ranking a candidate, are consenting to the election (in which case it is a kind of ranked approval, in a sense), or a dummy candidate can be used to indicate the approval cutoff in the preference order, which is technically superior because it allows voters to express preferences between unapproved candidates.

If each voter has some set X he prefers to all the others, but are indifferent to the members among X, there should be a way for him to express this so that if this is true for all voters, the result of the expressed votes is the same as if one had run an approval election where each voter approved of his X-set.

A Range ballot provides the opportunity for this kind of expression. It's actually, potentially, a very accurate ballot. If it's Range 100, it is unclear to me that we should provide an opportunity for the voter to claim that the voter prefers A to B, but wants to rate them both at, say, 100 -- or, for that matter, at any other level. What this means is that the voter must *spend* at least 1/100 of a vote to indicate a preference. That's practically trivial. (It could be argued that the "expense" should be higher. It's also possible that the Range ballot isn't linear -- Oklahoma was not. But I won't go there now.)

Yes, Range passes that criterion, since voters can vote "Approval style". I also think Bucklin and QLTD pass it.

From the Range ballot, one can infer ranked preferences (equal ranking allowed). There is no particular motivation to rank insincerely. What motivation exists is for "exaggerating" -- allegedly -- preference strength. If there is Condorcet analysis, then this is blunted just a little. Thus, if you have a significant preference, there is motivation to express it, either accurately or "just a little" or somewhere in between.
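That inference is a simple sort; a sketch, with ratings invented for illustration:

```python
# Derive a ranking (equal ranking allowed) from a Range ballot by
# grouping candidates with equal ratings into tiers, highest first.

def ranking_from_range(ballot):
    tiers = {}
    for cand, rating in ballot.items():
        tiers.setdefault(rating, []).append(cand)
    return [sorted(cands) for rating, cands in
            sorted(tiers.items(), key=lambda kv: -kv[0])]

ranks = ranking_from_range({"A": 100, "B": 99, "C": 1, "D": 0})
# each inner list is one tier of equally rated candidates
```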

If you have Condorcet analysis, the incentive to exaggerate is to say A (100) > B (99) > C (1) > D (0) instead of, for instance, A > B (75) > C (30) > D (2).

Sure. Except what is the "instead of" set of ratings? I think we need to remember that these are *votes* and *not* sentiments. "Ratings" is a convenient way to talk about fractional votes, but what the voter is doing is expressing some combination of utility and probability assessment. The voter wants, naturally, to put votes where they count, where they make a difference. The voting patterns described -- both of them -- preserve preference order; however, with the first pattern, we may speculate, the voter sees the important pairwise election as involving the (A,B) vs. (C,D) pair; we can't really tell more than that. The voter prefers A but quite clearly is willing to accept B. B might be a frontrunner. If C is a frontrunner, that would, as well, explain the low vote of 1 point. In Approval, it's obvious how this voter would vote. If there is a runoff that considers pairwise preference expression, there is then some motivation to allocate that 1 point in order to express a preference, just in case -- or for independent reasons, such as the allocation of ballot position in future elections, or public campaign funding.

The voters are in control of the input to the system; they are making *decisions,* not expressing preferences as such. They are tossing weights on a set of scales according to some simple rules. Toss anything from nothing up to one full vote's weight in each candidate's scale. Candidate with the most weight wins.

Condorcet analysis, added to a Range system, incentivises, not exaggeration, as Kristofer stated, but maintenance of preference order, which will bring the vote *closer* to being a more accurate representation of the voter's actual raw utilities, normalized. It doesn't cause the voter to "exaggerate;" that motive comes from approval strategy. Want to balance this? Lower resolution range increases the cost of maintaining preference. At Range 2, the cost is high: one-half vote. Probably too high.

Bucklin has no cost to maintaining preference, so preference will be maintained: if we allow equal ranking, it won't be used unless the voters actually have no significant preference between the candidates.

(There is a potential Range method which is the fractional-vote analogy to Bucklin, where there is a voting power cost to lowering the rank of a candidate. This was actually done in Oklahoma, except they didn't allow multiple votes in the top two ranks. So a Range/Bucklin hybrid would be counted in rounds, where the top rating is counted, then the next, then the next, continuing until a majority of ballots have been found to contain a vote for the winner, or all votes have been counted, in which case -- if there is a plurality rule -- the candidate with the most votes wins.)
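The round-by-round count described in that parenthetical can be sketched as follows; the sample ballots, rating scale, and plurality fallback are illustrative assumptions:

```python
# Bucklin-style count: ratings are added in from the top down until
# some candidate has votes on a majority of ballots; if no majority
# ever appears, the candidate with the most votes wins (plurality rule).

def bucklin_count(ballots, ratings):
    """ballots: list of dicts mapping candidate -> rating.
    ratings: the allowed ratings, highest first, e.g. [3, 2, 1]."""
    n = len(ballots)
    counts = {}
    for level in ratings:
        for ballot in ballots:
            for cand, rating in ballot.items():
                if rating == level:
                    counts[cand] = counts.get(cand, 0) + 1
        leader = max(counts, key=counts.get)
        if counts[leader] * 2 > n:  # votes on a majority of ballots
            return leader, counts
    return max(counts, key=counts.get), counts  # plurality fallback

ballots = [
    {"A": 3, "B": 2}, {"A": 3}, {"B": 3, "A": 2},
    {"B": 3}, {"C": 3, "B": 2},
]
winner, counts = bucklin_count(ballots, [3, 2, 1])
```

Here no candidate reaches a majority at the top rating, so the second rating is folded in, at which point B holds a vote on four of five ballots.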

If it's CWP, the picture gets more complex as I'm not sure what the optimal strategy is there. If you use the "Approval plus Condorcet rank, no ordering among disapproved", then there's not much incentive to exaggerate among the approved; instead, the strategy involves setting the Approval cutoff just right.

Borda essentially enforces this, the problem with Borda is that assumption of equal preference strength. It's been pointed out that with many candidates -- a "virtual candidate system" has been proposed -- Borda becomes, in effect, Range, very much like the Range I just proposed.

There would have to be virtual candidates, and those virtual candidates would never be elected (even if one of them got highest Borda score). Otherwise, Borda's extreme weakness to burying would come into play and people would do

[favorite] > [nobodies] > [opponent]

which would lead to one of the nobodies winning if people disagree about favorites.

It was purely a theoretical concept: Borda becomes Range if there are many candidates, across the spectrum. That is not a reasonable assumption.

However, Borda with equal ranking allowed (with empty ranks resulting) is clearly a pure Range method. We can then see that any argument that Borda is superior to Range must mean that the analyst thinks the voters must be constrained. ("The method is for honest men.... and we are going to make sure by not allowing them to be dishonest.") Surely there is some kind of weird thinking here!

(The analysis was for sincere voting only, so, unless the voters preferred nobodies, a shot in the dark, to the opponent, the voting pattern shown wouldn't happen. Absolutely, that's the vulnerability of Borda; with Range, what we'd get, with strategy that is just as strong but which is actually sincere, in that no expressed preferences are false, is favorite > opponent = nobodies, at the extreme.)


Sure. Setting conditions for runoffs with a Condorcet method seems like a good idea to me. One basic possibility would be simple: a majority of voters should *approve* the winner. This is done by any of various devices; there could be a dummy candidate called "Approved." To indicate approval, this candidate would be ranked appropriately, and all higher-ranked candidates would be considered to get a vote for the purposes of determining a majority.
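A sketch of that dummy-candidate device; the marker name and the ballots are invented for illustration:

```python
# Everything ranked above the marker counts as approved; a ballot with
# no marker is read here as approving all ranked candidates.

def approvals_from_ranking(ranking, marker="APPROVED"):
    if marker in ranking:
        return set(ranking[:ranking.index(marker)])
    return set(ranking)

def majority_approved(ballots, candidate):
    approvals = sum(candidate in approvals_from_ranking(b) for b in ballots)
    return approvals * 2 > len(ballots)

ballots = [
    ["A", "B", "APPROVED", "C"],  # approves A and B; ranks C below cutoff
    ["B", "APPROVED", "A"],
    ["C", "A"],                   # no marker: approves all ranked
]
```

The voter can still express preferences among unapproved candidates (C below the marker on the first ballot), which is the technical advantage claimed above.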

So, an approval cutoff. For a sincere vote, what does "approved" mean here? Is it subject to the same sort of ill definition (or, in your opinion, "non-unique nature") that a sincere vote for straightforward Approval has?

It has a very specific meaning for me: it means that the voter would rather see the approved candidate win than face the difficulties -- and risks -- of additional process. It is a *decision*. Approval votes cannot be derived from a preference profile alone. They *can* be derived from Dhillon-Mertens normalized VNM utilities. That's why Dhillon and Mertens did propose Approval as a possible implementation of Rational Utilitarianism. Consider them rounded-off VNM utilities. ("VNM utilities" sounds complicated. It isn't, unless one insists on *numbers*. It's how we normally make decisions: we weight outcomes with probabilities, instinctively.)

Okay, but if you're going to use VNM utilities, don't you need to put the election method inside a greater feedback system so that they normalize correctly?

No. I should have stated that the utilities are normalized. Now, strictly, Relative Utilitarianism normalizes over a complete, universal candidate set, and I don't know how this translates to a limited-set election. (It works if the set of candidates on the ballot are the possible universe.)

The vNM utilities, I've been assuming, are normalized to the set of candidates the voter considers reasonably possible. That would be all candidates on the ballot, plus any write-ins that the voter considers reasonable to include. (A "reasonable" write-in, if the voter considers the candidacy impossible, simply gets the same utility as the nearest candidate who *is* considered reasonable. Because the theoretical probability of election is never zero, there is always a finite gap maintained.) Pure vNM utilities, if I'm correct, do maintain preference order.
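A minimal sketch of that normalization, under the assumption just stated: scale so the best and worst of the plausible set map to 1 and 0, and clip anything outside that range. The raw utilities and names are invented:

```python
# Normalize raw utilities over the candidates the voter considers
# plausible; implausible candidates are clipped into [0, 1] rather
# than allowed to stretch the scale.

def normalize(utilities, plausible):
    hi = max(utilities[c] for c in plausible)
    lo = min(utilities[c] for c in plausible)
    span = (hi - lo) or 1.0  # guard against a degenerate flat profile
    return {c: min(1.0, max(0.0, (u - lo) / span))
            for c, u in utilities.items()}

votes = normalize({"A": 9.0, "B": 6.0, "C": 2.0, "W": 10.0},
                  plausible={"A", "B", "C"})
# "W" is a write-in the voter considers implausible; it is clipped
```

Note this simple clipping collapses the "finite gap" mentioned above; preserving a strict preference for W over A would need a small epsilon, which I leave out of the sketch.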

And, as I mention, it's possible, then, with fairly minor tweaks, to move toward Range. If there is a Bucklin Range ballot, the ballot itself is a Range ballot, thus we are collecting that crucial data and we can monitor election performance. The door opens again.

I've snipped most of the paragraph as I think I've answered it with my attempt to distinguish once-through methods from those that need explicit feedback. As an aside, I wonder how one would aggregate and publish the data so that ballots can't be used as "fingerprints" in vote selling. Fingerprinting would be like this: someone tells you to vote Y at value 63% (and all others at specified other values). They then go and check if any such ballot was registered. Since cardinal ballots are fine-grained, the chance of collision is slight. Hm, that may be an interesting algorithmic problem - it would probably involve rounding off the votes so that there are at least p ballots with the same (rounded-off) value for the same candidate...
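One way to read that rounding-off idea, as a sketch; the bin width, threshold, and suppression rule here are assumptions, not a worked-out scheme:

```python
from collections import Counter

# Coarsen published ratings into bins and suppress any bin value that
# fewer than p ballots share, so a single fine-grained rating (e.g.
# "63%") can't serve as a fingerprint for vote selling.

def publishable(ratings, p, width=10):
    """ratings: the 0-100 scores one candidate received on the ballots."""
    binned = [round(r / width) * width for r in ratings]
    counts = Counter(binned)
    return [b if counts[b] >= p else None for b in binned]

published = publishable([63, 61, 58, 97], p=2)
```

Three of the four ratings round into the same bin and survive; the lone 97 would be published only as a suppressed value.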

Here is the kicker: Suppose I want to coerce (or buy) your vote. I simply say to you, I have insider access to the ballots. I want you to write in so-and-so. If you don't, bad consequences.

Tell me, does it matter if the coercer can actually see the ballots? The above method should work now; in theory it should work even without ballot access, but, in fact, clerks don't report isolated write-in votes. (So sue them!)

I'm not actually proposing fine-grained Range for public use. I'm not really proposing Range at all, immediately. Just Open Voting and American Preferential Voting, er, Bucklin.


There's also the somewhat strategy resistant variant that has been proposed earlier: voters input ballots that rank some or all candidates. All ranked candidates are considered "approved". Break Condorcet cycles by most "approved" candidate (or devise something with approval opposition to preserve clone independence, etc). The point, at least as far as I understood it, is that you can't bury without giving the candidates you're burying "approval", thus burial is weakened.

Sure. That, in fact, is Bucklin! Ranking a candidate is approval of the candidate. (But Bucklin, itself, doesn't do Condorcet analysis.)

It's not really Bucklin, since the approval is in one go of all the candidates you ranked, whereas in Bucklin, the approvals are added in as the method proceeds. The approval cutoff would be - for sincere votes, at least - "these are good enough that I want to distinguish between them, but those are all bad".

In Bucklin, all ranked candidates are considered "approved," but the approvals are phased in to allow detection of a higher ranked majority preference.
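A minimal sketch of the ranked-approval Condorcet variant described above, assuming ranked ballots as Python lists (best first) with unranked candidates unapproved; the function name and the plain most-approved fallback are my choices:

```python
def condorcet_or_approval(ballots, candidates):
    """Elect the Condorcet winner of the ranked ballots if one
    exists; otherwise break the cycle by approval, where every
    ranked candidate counts as approved."""
    def prefers(ranking, a, b):
        # a is preferred to b if ranked higher, or ranked while b is not
        ia = ranking.index(a) if a in ranking else len(candidates)
        ib = ranking.index(b) if b in ranking else len(candidates)
        return ia < ib

    for a in candidates:
        if all(sum(prefers(r, a, b) for r in ballots) >
               sum(prefers(r, b, a) for r in ballots)
               for b in candidates if b != a):
            return a  # Condorcet winner

    # Cycle (or no strict pairwise champion): most-approved wins
    return max(candidates, key=lambda c: sum(c in r for r in ballots))
```

Under this rule, ranking a rival at all hands them approval, so a burial that keeps them ranked strengthens them in the fallback count; the only way to withhold approval is to leave them off the ballot entirely.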


Want perfect? Asset Voting, which bypasses the whole election method mess! Single-vote ballot works fine! And that's what many or even most voters know how to do best.

Or have a parliament and bypass the whole thing.
No, you still have the question of how to get the parliament. Asset Voting, actually, is the bypass. It can elect a parliament that is rigorously "proportional" -- more accurately, it is fully representative, with representation being created by free choices.

If you have a parliament, you can use a multiwinner method. Multiwinner methods are also vulnerable, but absent consistent errors (vote management), the distortion in which winners they pick is not as bad as with single-winner methods. If a single candidate gets replaced by someone else in a council of 100, that's a 1% error. If the winner is replaced in a single-winner election, the whole result has changed; no greater error is possible (though how bad it is depends on who the replacement is).

I've made the point many times about STV: the errors of the method are confined to the stage where eliminations begin. Multiwinner STV is *much* better than IRV, and the more winners, the better it is. *However*, Asset is just about perfect, and is simpler to vote. (It was, and could remain, STV; applied even to single-winner IRV, it makes the method into something that could be better than Range.) With Asset, first-preference candidates aren't actually "eliminated" unless the lower preference is used. Lower preferences, then, may decline somewhat, but with no cost to performance -- just more need for further deliberative process, which is probably *good*. I.e., the electors' average knowledge of the candidates is probably higher than the average knowledge of the voters who voted for them.

Sure. FairVote screwed up royally, hitching their sleigh, not to a star, but to a cinder, the *worst* kind of STV, single-winner, on the theory that it would pave the way. It could block the road! Bucklin was used multiwinner, but I'm not sure that the method was optimal. Probably not. Could be done, though. Use Range ballots, though....

I'm not sure how Range Bucklin could be turned into a multiwinner method. My multiwinner version of Bucklin is this: keep adding votes until someone exceeds a Droop quota. Elect that person and reweight the ballots that contributed to the victory, according to this formula: new weight = old weight * (votes for winner - quota) / (votes for winner). Then remove the winner from all ballots and restart the count.
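As a sketch of the procedure just described, assuming "votes for winner" means the weighted approval total at the Bucklin round where the quota is crossed (the function name and the fractional Droop quota are my choices, not from the post):

```python
def multiwinner_bucklin(ballots, seats):
    """Reweighted multiwinner Bucklin: phase in ranks until someone
    exceeds a Droop quota, elect them, reweight the contributing
    ballots, remove the winner, and restart the count."""
    weighted = [[1.0, list(r)] for r in ballots]   # [weight, ranking]
    quota = len(ballots) / (seats + 1)             # fractional Droop quota
    winners = []
    while len(winners) < seats:
        elected, tally = None, {}
        max_rank = max((len(r) for _, r in weighted), default=0)
        for depth in range(1, max_rank + 1):       # Bucklin rounds
            tally = {}
            for w, r in weighted:
                for c in r[:depth]:                # ranks 1..depth now count
                    tally[c] = tally.get(c, 0.0) + w
            over = [c for c in tally if tally[c] > quota]
            if over:
                elected = max(over, key=lambda c: tally[c])
                keep = (tally[elected] - quota) / tally[elected]
                for ballot in weighted:
                    if elected in ballot[1][:depth]:
                        ballot[0] *= keep          # reweight contributors
                break
        if elected is None:                        # nobody reached quota:
            elected = max(tally, key=tally.get)    # most approvals (assumes
                                                   # some rankings remain)
        winners.append(elected)
        for _, r in weighted:                      # remove winner, restart
            if elected in r:
                r.remove(elected)
    return winners
```

With four A>B voters and two C>B voters electing two seats, A crosses the quota of 2 first, the A ballots keep half their weight, and B then reaches the quota at the second rank.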

One of the first Bucklin elections was multiwinner. I just saw it the other day. Five winners. I think it was basically plurality-at-large; every voter had five votes, and the top five vote-getters won.

I have not researched true PR methods using a Bucklin type ballot. It should be possible. I'm not sure that it would be better than Proportional Approval Voting or Reweighted Range Voting, but I am sure that none of this would equal what Asset could do.

As an example, say there's an election choosing three winners from four candidates. A wins the first count, then the voters are reweighted. The election turns into "A wins, plus [an election for two winners out of three]". Nicely recursive.

The problem with Range Bucklin is that we're no longer certain that the votes *can* sum up to the Droop quota. Consider the case where all voters use non-normalized cardinal ballots and nobody's maximum range exceeds 1/10 of max score. The scores may well never reach the Droop quota.

The quota, in that case, would have to be defined upon the votes actually cast! Pretty strange election you've just thought of!


But Range Voting, in a ranked form, was written into law in the U.S.; I think it was about 1915. Dove v. Oglesby was the case; it's findable on the net. Lower-ranked votes were assigned fractional values; I think it was 1/2 and 1/3. Relatively speaking, this would encourage additional ranking, I'd expect.

By that reasoning, any and all weighted positional systems are Range. Borda is Range with (n-1, n-2, n-3 ... 0). Plurality is Range with (1, 0, 0, ... 0). Antiplurality is Range with (1, 1, 1, ..., 0), and so on.
Yes.
Borda is *clearly* Range, simply with a weird restriction. Likewise the others: Range with weird restrictions. But, here, I was following my classic analysis:

Plurality: Vote one full vote for one candidate only. The candidate with the most votes wins.
Approval: Vote one full vote for as many candidates as desired. The candidate with the most votes wins.
Range: Approval with fractional votes allowed.
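The point that weighted positional rules are all the same tally with different weight vectors -- Range with a restriction on which rating patterns a voter may cast -- can be shown with one generic function. A toy illustration; the names are mine:

```python
def positional_score(rankings, weights):
    """Generic positional tally: each ballot gives weights[pos]
    points to the candidate ranked at position pos.  Different
    weight vectors yield Borda, Plurality, Antiplurality, etc."""
    tally = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):
            tally[cand] = tally.get(cand, 0) + weights[pos]
    return tally

ballots = [['A', 'B', 'C'], ['B', 'A', 'C'], ['A', 'C', 'B']]
borda = positional_score(ballots, [2, 1, 0])          # Range(2), one candidate per rating
plurality = positional_score(ballots, [1, 0, 0])      # one full vote for one
antiplurality = positional_score(ballots, [1, 1, 0])  # a vote for all but the last
```

Removing the restriction that each weight be used exactly once (allowing equal and empty ratings) turns the same tally into unrestricted Range.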

If it's Range with restrictions, I don't think it would be Range anymore.

In a sense. It is, however, the same basic "construction." Think of it as vote-for-one-rating Range. I'm claiming that it is *useful* to think of Borda as a restricted Range method. Or of Range as a Borda method with some constraints removed.

The most obvious constraint removed is the one against equal rating: equal rating is allowed. Then, in parallel, empty ratings are allowed. That's all. If there are N candidates, Borda is Range(N-1), but voters are not allowed to use a rating for more than one candidate, and -- in some proposed Borda implementations -- either all ratings are used or the voter loses voting power.

 Plurality is Condorcet with the restriction that you can only vote for one.

What fractional votes? That depends on the method. Nice one: 0, 1, 2, but expressed as -1, 0, 1. Has a nice majoritarian interpretation: Candidate must get a positive vote to win.
Oklahoma was, I think, 0, 1/3, 1/2, 1.
(I'd have preferred, say, 0, 1/2, 2/3, 1, I think. Oklahoma gave too much weight to the first preference over the second.) But, hey, we will have enough trouble getting full-vote Bucklin in place, enough trouble just to get jurisdictions to Count All the Votes, i.e., to use Open Voting or Approval.

That sounds like Nauru Borda. Nauru's version of Borda had first place count 1 point, second 1/2, third 1/3, and so on.

Exactly, actually: Oklahoma Bucklin, with no majority found in the first rounds, became just that (for up to three ranks; but additional votes in third rank were allowed).
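For concreteness, the harmonic weighting that Nauru Borda and this Oklahoma fallback share (with additional third-rank votes all counted at 1/3, as noted above) can be tallied exactly with fractions. An illustrative sketch, not the statutory counting rule:

```python
from fractions import Fraction

# First choice counts 1, second 1/2, third (and beyond) 1/3 each.
WEIGHTS = [Fraction(1), Fraction(1, 2), Fraction(1, 3)]

def harmonic_tally(ballots):
    """Tally ranked ballots with harmonic (1, 1/2, 1/3) weights."""
    tally = {}
    for ranking in ballots:
        for pos, cand in enumerate(ranking):
            w = WEIGHTS[min(pos, 2)]   # positions past third count 1/3
            tally[cand] = tally.get(cand, Fraction(0)) + w
    return tally
```

Using exact fractions avoids the rounding questions that 1/3 raises in a floating-point count.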


----
Election-Methods mailing list - see http://electorama.com/em for list info
