At 12:20 PM 4/10/2010, Kevin Venzke wrote:
To me the only thing that matters is whether there is even one scenario
that could realistically arise and which is bad.

Two problems: what does "realistically" mean? And what's "bad"?

Let me propose an answer. Suppose a good model of voter behavior is built, given simulated absolute utilities, a model that could be tested by feeding it reasonable assumptions drawn from polls, political contribution patterns, voter turnout, and actual election results, with more detailed ballot data being a great help. It then becomes possible to simulate large numbers of elections, with variables constrained within, or at least not insanely far from, what is found in actual voting situations. From this, the frequency of some pathological scenario can be estimated.
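As a rough illustration only, here is a minimal Python sketch of that kind of simulation. Everything in it is an assumption made for illustration, not a fitted model: the uniform utility distribution, the crude bullet-voting behavior model, and the choice of "plurality winner differs from the summed-utility maximizer" as the pathology being counted.

# Minimal sketch: estimate how often a chosen pathology occurs, and the
# average utility lost when it does. All parameters are illustrative.
import random

def simulate_once(n_voters=1000, n_cands=3):
    # Absolute utilities on a 0..1 "heaven-hell" scale, drawn uniformly.
    utils = [[random.random() for _ in range(n_cands)] for _ in range(n_voters)]
    # Crude behavior model: every voter bullet-votes for their favorite.
    tallies = [0] * n_cands
    for u in utils:
        tallies[u.index(max(u))] += 1
    plurality_winner = tallies.index(max(tallies))
    # "Best" winner by summed absolute utility.
    sums = [sum(u[c] for u in utils) for c in range(n_cands)]
    best = sums.index(max(sums))
    return plurality_winner != best, (sums[best] - sums[plurality_winner]) / n_voters

def estimate(trials=2000):
    fails, loss = 0, 0.0
    for _ in range(trials):
        failed, l = simulate_once()
        fails += failed
        loss += l
    return fails / trials, loss / trials

freq, avg_loss = estimate()
print("pathology frequency ~", round(freq, 3),
      " mean utility loss per voter ~", round(avg_loss, 4))

A serious model would, as described above, be calibrated against real-world data rather than convenient distributions.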

But even more importantly, the damage from that scenario can be estimated. A "pathological scenario" that seems *awful* based on assumptions about what is necessary in elections, if it actually improves utility, isn't harmful at all! It simply shows that the criterion that wasn't satisfied was defective.

So, to start, in considering possible harmful scenarios, don't assume that the scenario is *realistically* harmful unless it can be connected with some utility profile that would actually function as assumed, *and* that would result in significant loss of overall utility.

Sure, you can show technical failure of a criterion by making up any scenario you like. FairVote uses that fact to avoid the implications of center squeeze, arguing that no voting system is perfect and that, besides, the scenario is "not known to have occurred in any election." With center squeeze, that claim was always known to be false, by analogy with top-two runoff, which IRV supposedly simulates, and the scenario is now known to be a realistic possibility because of the recent Burlington election, where it happened.

Still, the FairVote objection, had it been about the failure of some other criterion, does have a level of validity, provided that failing scenarios are made up without any attention paid to realistic voting patterns. That's not true for center squeeze, which is well enough known to be covered by Robert's Rules of Order in its criticism of "instant runoff voting." (They call it preferential voting, but they describe a true-majority-required single transferable vote method, and they criticize it for center squeeze as well as for the loss of additional information, valuable to the voters, from the results of earlier polls in a repeated-ballot election. They also require the election to be repeated if no candidate gains a majority of the votes because of truncation, i.e., unless a majority of all ballots cast that aren't "blank" contain a vote for the winner. FairVote has lied about this for years.)

 It seems like there
is a hold up with the fact that B voters won't vote for C. So feel free
to change the scenario to:

49 A
5 B
19 B>C
27 C>B

It remains bad.

There is absolutely no way to tell that an outcome is bad unless underlying utilities are studied.

This is a classic vote-splitting outcome. What if there were a face-off between A and B? Students of voting systems, unless they are wise to this point, will say that, of course, B will win, though only by 51:49. They don't generally consider that turnout will depend on preference strength. For the A voters, from their bullet votes, it's easy to infer relatively high preference strength. We don't know how strongly the B and C voters prefer B to A, except that we can infer a strong preference for 5 of them.

Suppose that some section of the C voters -- and this is most likely among them -- has a utility profile like:

C: 10
B: 1
A: 0

Will they bother to vote? If they do vote, sure, they will vote for B in an A/B faceoff. But in a real election, many won't actually vote.
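To make that concrete, here is a small sketch using the ballot figures above. The turnout rule (probability of voting proportional to the A-versus-B preference gap on the 0-10 scale) and the assumption that all 27 C>B voters share the weak 10/1/0 profile are mine, purely for illustration.

# Turnout-sensitive A/B face-off sketch; the turnout rule and utilities
# are illustrative assumptions, not data.
# Each group: (count, utility for A, utility for B), on a 0-10 scale.
groups = [
    (49, 10, 0),   # A bullet voters: strong preference for A
    (5,  0, 10),   # B bullet voters
    (19, 0, 10),   # B>C voters
    (27, 0, 1),    # C>B voters, assumed weak profile C=10, B=1, A=0
]

votes_A = votes_B = 0.0
for count, uA, uB in groups:
    turnout = abs(uA - uB) / 10.0   # assumed: stronger pairwise preference, higher turnout
    if uA > uB:
        votes_A += count * turnout
    else:
        votes_B += count * turnout

print("A:", round(votes_A, 1), " B:", round(votes_B, 1))   # A: 49.0  B: 26.7

Under those assumptions A keeps its 49 votes while B's support shrinks to about 27, even though B beats A 51:49 on the full preference profile.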

Now, was this election of A a "bad outcome"?

I'm just going to make up some utilities that would fit the votes. I'm not going to use weak votes, though an accurate analysis would; i.e., I will normalize absolute utilities to one full vote for each voter.

                A       B       C
49 A            10      0       0
5 B             0       10      0
19 B>C          0       10      5
27 C>B          0       5       10

totals          490     375     365

Now, please explain to me why A is a "bad" outcome. I assumed utilities that were effectively sincere normalized Range votes in a Range 2 election.
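For anyone who wants to check the arithmetic, a few lines of Python reproduce the totals (the helper is mine; the numbers are just the table above):

# Sum of (group size x rating) per candidate, for the table above.
def range_totals(groups):
    # groups: list of (count, (rating_A, rating_B, rating_C)) tuples
    n = len(groups[0][1])
    return [sum(count * ratings[c] for count, ratings in groups) for c in range(n)]

first_scenario = [
    (49, (10, 0, 0)),
    (5,  (0, 10, 0)),
    (19, (0, 10, 5)),
    (27, (0, 5, 10)),
]
print(range_totals(first_scenario))   # [490, 375, 365]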

 I can't see what criticism would remain of this, other
than saying that in a real election we might be lucky enough to have
some A voters vote A>B and accidentally give the election away.

Well, a criticism remains! And we wouldn't be "lucky."

Sure, I can imagine scenarios where something else would happen. The values I used above were extrapolations from an assumption that 50% was considered the minimum rating for "approval," and that voting for a candidate was that approval. It's the "expected value" of the election.

What if I consider the worst case, still with the 50% approval level? A scenario maximally strong for B (C minimized):

                A       B       C
49 A            10      4       0
5 B             0       10      0
19 B>C          0       10      1
27 C>B          0       9       10

totals          490     679     289

And minimally strong for B (C maximized):

                A       B       C
49 A            10      0       4
5 B             0       10      4
19 B>C          0       10      9
27 C>B          0       1       10

totals          490     267     657
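Again, for checking: the same range_totals helper sketched earlier reproduces both sets of totals.

b_strong = [          # maximally strong for B, C minimized
    (49, (10, 4, 0)),
    (5,  (0, 10, 0)),
    (19, (0, 10, 1)),
    (27, (0, 9, 10)),
]
b_weak = [            # minimally strong for B, C maximized
    (49, (10, 0, 4)),
    (5,  (0, 10, 4)),
    (19, (0, 10, 9)),
    (27, (0, 1, 10)),
]
print(range_totals(b_strong))   # [490, 679, 289]
print(range_totals(b_weak))     # [490, 267, 657]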

My point is that from a mere preference profile, one cannot determine who the "best" winner of an election is. Preference strength information is necessary. Doing this in a real election can be difficult without additional constraints, and the most common one is the requirement for majority approval.

> > By the way, I do want to maximize the sum of
> > utilities. I just don't
> > think you can be so direct as to ask for them.

Perhaps, but I now disagree: there is a way to ask for them and get sincere utilities in response. It takes the possibility of more than one ballot. Something that has been overlooked is that a series of repeated ballots tests absolute preference strength.

>
> Sum of utilities would be a good approach for many uses if
> we could get the personal utilities. In some non-competitive
> elections and polls we can get them but political elections
> may be a more difficult environment.

Well, I have assumptions about what tends to maximize utility. The ideal
scenario for me is that the median voter has 3+ viable options to pick
from (and not find the choice obvious).

An assumption about what tends to maximize utility can be way, way off. That's what I believe I showed above. With a simple normalized-utility Range 2 election, one that made the votes rational without additional assumptions, A was, by quite a margin, the best winner from utility analysis. It took more extreme analysis, using a Range 10 election, to produce results favoring B or C by a high margin.


> (There may also be some extreme situations where the sum of
> utilities is not what we want. For example it might make
> sense to improve the utility of all voters worth 10 points
> rather than improve the utility of all but one voter with 12
> points and then kill or otherwise cause a major decrease in
> utility to that remaining one voter.

Eh? It might make sense to appoint our favorite the dictator, as well. Why bother with these stinkin' elections?

> In this case one could
> btw also consider allowing the voters to give ratings like
> "minus infinity" to avoid this kind of situations, i.e.
> ratings would not be based on a fixed range but some wider
> scale, maybe indicating that 0 means "neutral", 100 means "I
> like a lot" etc.)

Yes. If we could perceive utility more clearly I might have to clarify
my opinion.

We can. I assumed normalized utilities above, which is a decent starting place, because there is no basis for assuming anything else. We know that the voters -- assuming they were free -- had enough preference strength to be bothered to vote in the election. Whether that strength was high or low depends on many factors. Was this election on the same ballot as one the voters valued highly, so that they voted on this one, which they valued little or not at all, merely out of habit? Or was this election itself very important, and to which voters?

In any case, the sound approach, in the end, is to study *absolute* utility profiles. I'm not saying that the sound approach is easy! We can, for the time being, make the "equal voter" assumption, and thus normalize utilities, but we must keep in mind that this is actually an unrealistic assumption, and there is ample proof in real election turnout figures that the assumption doesn't hold. Besides being preposterous!

Simulations would properly start with the entire electorate, with utilities assigned via relatively realistic probability distributions on the "heaven-hell" scale. Then there are two kinds of elections: "happen to be there" elections, where the voter is voting regardless of absolute preference strength (over the entire set of reasonable candidates), and true "voluntary elections," where the voter will turn out to vote because of relatively high preference strength, on average, and not if the preference strength is low.
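Here is a sketch of that starting point, with the caveat that the utility distribution (a beta distribution skewed low), the turnout threshold, and the per-ballot normalization are all assumptions invented for illustration:

# Sketch: one electorate, two turnout regimes. All parameters illustrative.
import random

def electorate(n_voters=10000, n_cands=3):
    # Absolute utilities on a 0..1 "heaven-hell" scale; Beta(2, 5) is just an
    # example of a non-uniform distribution skewed toward the low end.
    return [[random.betavariate(2, 5) for _ in range(n_cands)]
            for _ in range(n_voters)]

def normalized_totals(voters):
    # "Happen to be there" election: everyone votes; each ballot is normalized
    # so the voter's favorite gets 1 and least-favorite gets 0.
    totals = [0.0] * len(voters[0])
    for u in voters:
        lo, hi = min(u), max(u)
        span = (hi - lo) or 1.0
        for c, x in enumerate(u):
            totals[c] += (x - lo) / span
    return totals

def voluntary_totals(voters, threshold=0.2):
    # "Voluntary" election: a voter turns out only if the spread between
    # favorite and least-favorite exceeds an assumed threshold.
    turned_out = [u for u in voters if max(u) - min(u) > threshold]
    if not turned_out:
        return [0.0] * len(voters[0]), 0
    return normalized_totals(turned_out), len(turned_out)

voters = electorate()
print("all voters:", [round(t) for t in normalized_totals(voters)])
totals, n = voluntary_totals(voters)
print("voluntary (", n, "voted):", [round(t) for t in totals])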

This really does affect runoff elections, I'm sure. It is a likely reason for "comeback" elections, I suspect, which seem to occur in about 1/3 of non-partisan runoffs.

While the scenarios I bring up show some strategy, I consider the strategy
natural and not something consciously plotted in advance. In fact if you
called the truncating voters strategic, I think they would be offended
and not see it that way at all.

I projected utilities from the votes, assuming that the votes were sincere. That's why the first scenario was Range 2; we don't have more data than that. Then I went to the best and the worst cases for B and C.

I didn't show the additional possibilities: if we were to look at absolute utilities, we might see something like this. (I'll maximize for A; that is, I will normalize absolute utilities on the scale determined by the A voters, and then assume maximally favorable utilities for A that still explain the B and C voting patterns.)

                A       B       C
49 A            10      0       0 (A voters have strong feelings; B and C are the same to them)
5 B             7       10      8 (This gives motive to truncate for B alone)
19 B>C          7       10      9 (This gives motive to add C)
27 C>B          7       9       10

totals          847     483     481

Now, make it worst for A. I'll set this up to maximize B, keeping C low:

                A       B       C
49 A            10      9       0
5 B             0       10      0
19 B>C          0       10      5?
27 C>B          0       9       10

totals          490     924     365

Again I don't think voters will view truncation as a strategy, and will
always do it (especially with more candidates).

Real voters tend to vote for one, in most elections, unless they have party labels to guide them. It's about the limited information they have. This is the problem that Carroll addressed more than 120 years ago with Asset Voting. Truncation isn't so much a strategy as a necessity.

----
Election-Methods mailing list - see http://electorama.com/em for list info
