[EM] Simulating multiwinner goodness

2010-03-11 Thread Brian Olson
There was a question on the list a while ago, and skimming to catch up I didn't 
see a resolution, about what the right way to measure multiwinner result 
goodness is.

Here's a simple way to do it in a simulator:
Each voter has a preference in [0.0 ... 1.0] for each candidate. Measuring the 
desirability of a winning set of candidates is simply a matter of summing up 
those preferences, capping each voter at a maximum of 1.0 satisfaction.

Unfortunately, this won't show proportionality. If 3/4 of the population have a 
1.0 preference for a slate of 3/4 of the choices, we would measure electing one 
of them as being just as good as electing the whole set.

So, we could apply the quota. If a candidate is elected with 3 times the 
quota's worth of support, only apply 1/3 of each voter's preference for that 
candidate to their happiness sum.

Now the huge coalition with their slate elected should each add up to about 1.0 
happiness, and smaller coalitions should get theirs too.

This is sounding a bit like an election method definition, and I expect that 
this definition of 'what is a good result' pretty much implies a method of 
election. At worst, given ratings ballots that we can treat as the simulator 
preferences, and a not-too-large number of possible winning sets of 
candidates, get a fast computer, run all the combinatoric possibilities, and 
elect the set with the highest measured sum happiness.
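A brute-force sketch of that search, assuming a Hare quota (voters/seats) and measuring a candidate's quotas of support as their summed preference across all voters; both of those details are my guesses at choices the post leaves open:

```python
from itertools import combinations

def quota_weighted_score(prefs, winners, quota):
    """Capped happiness sum, discounting candidates elected with more
    than one quota's worth of support (summed preference)."""
    # Support is an electorate-wide property; compute it once per winner.
    support = {c: sum(p[c] for p in prefs) for c in winners}
    total = 0.0
    for p in prefs:
        happiness = sum(p[c] / max(1.0, support[c] / quota) for c in winners)
        total += min(1.0, happiness)
    return total

def best_set(prefs, candidates, seats):
    """Try every seat-sized combination and keep the happiest."""
    quota = len(prefs) / seats  # Hare quota -- an assumption
    return max(combinations(candidates, seats),
               key=lambda ws: quota_weighted_score(prefs, ws, quota))
```

With two equal-sized blocs and two seats, the mixed set scores highest, which is the proportionality behavior the discounting is meant to produce.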

Another thing we could measure in multiwinner elections (and possibly single 
winner) is the Gini inequality measure. If we have a result with both pretty 
high average happiness and low inequality, that's a good result.
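For reference, the inequality measure over per-voter happiness scores can be sketched as a standard Gini coefficient (0 means everyone is equally happy):

```python
def gini(happiness):
    """Gini coefficient of nonnegative per-voter happiness scores:
    0.0 = everyone equally happy, near 1.0 = one voter has it all."""
    vals = sorted(happiness)
    n, total = len(vals), sum(vals)
    if total == 0:
        return 0.0
    # Identity: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with x sorted ascending and i counted from 1.
    return (2.0 * sum(i * v for i, v in enumerate(vals, 1))
            / (n * total) - (n + 1.0) / n)
```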

Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Terry Bouricius
Brian,

But obviously, real world satisfaction with an election outcome is not so 
straightforward. I may favor a certain slate of candidates, but feel huge 
dissatisfaction if they all win, such that there is no opposition in the 
legislative body to keep them honest. This is what happened for many in 
British Columbia in 2001, when the Liberal Party won 77 out of 79 seats in 
the Provincial legislature.

The utility, or hoped-for-happiness measurement, taken before the election may 
be changed BY the election results themselves. While this is especially true 
of multi-seat elections, it is even true of single seat elections. I may 
want my candidate to win, but be disappointed if she wins despite the fact 
that a majority of voters preferred another candidate (due to a feature of 
the voting method)...My preference for majority rule may trump my 
candidate preference.

Terry Bouricius




Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Jonathan Lundell
As with any choice system based on cardinal utility, there end up being two 
problems that are not, I think, amenable to solution. One is the 
incomparability of individual utility measures from voter to voter (and here 
we're talking about utility deltas, since the utilities are normalized to 
max=1.0). The other is that, even if comparability were solved, we don't have a 
means of, in the individual case, determining what they are.

In particular, reported utility isn't very useful, since for the system to 
work, we need sincere utility, and a utility-based system provides every 
incentive to strategize. And, as Terry suggests, it's not clear what we *mean* 
by utility here. Happiness with what? The outcome of the individual election? 
The makeup of the resulting legislature? The legislation resulting from that 
legislature?

And even if we could somehow measure the voter's ultimate happiness as a 
function of legislative outcome and go back in time to cast a vote, we still 
don't have utilities for the counterfactual alternatives.

However attractive it might be to fantasize about functions from cardinal 
utility to social choice, it comes down to an attempt to square the circle or 
invent a perpetual motion machine. The attempt might be fun, but we know a 
priori that it will fail.



Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Brian Olson
A couple years ago I moved from the California Democratic Party Machine to the 
Massachusetts Democratic Party Machine.
I'm not sad when my party wins, I'm sad when they run boring stick-in-the-mud 
establishmentarian candidates.
I'd love pressure from other parties to keep them honest, and that's what a lot 
of this whole election method reform thing is about.

Anyway, an election method can't (directly) give me better choices, but just 
help me and the rest of society choose from the options available at the time. 
I posit that a better choice method will (eventually) encourage the 
availability of better choices. The current pick-one-primary and 
pick-one-general favors the boring old establishment too much. If I can safely 
vote for the obscure but awesome candidate as my first choice, and the safe 
establishment choice as second or third, I think we'll see more interesting 
little guys, and sometimes they'll win.

But we knew all that.
/advocacy




Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Brian Olson
On Mar 11, 2010, at 11:29 AM, Jonathan Lundell wrote:

 As with any choice system based on cardinal utility, there end up being two 
 problems that are not, I think, amenable to solution. One is the 
 incomparability of individual utility measures from voter to voter (and here 
 we're talking about utility deltas, since the utilities are normalized to 
 max=1.0). The other is that, even if comparability were solved, we don't have 
 a means of, in the individual case, determining what they are.

Arrow made the same mistake. We can't compare interpersonal utility, but in 
practice we do. We set everyone's utility to One. One person one vote. That's 
how much you get.

 In particular, reported utility isn't very useful, since for the system to 
 work, we need sincere utility, and a utility-based system provides every 
 incentive to strategize. And, as Terry suggests, it's not clear what we 
 *mean* by utility here. Happiness with what? The outcome of the individual 
 election? The makeup of the resulting legislature? The legislation resulting 
 from that legislature?

Reported utility is vulnerable to all kinds of noise, imperfect reporting, 
imperfect introspection, and so on. And yet this can be simulated. We can make 
sim people who are perfectly knowable, add that noise, run the election, and 
see what happens compared both to the noisy utility and the true utility. When 
I did this, it turned out some methods are less vulnerable to noise! 
(Condorcet does better; IRV, with its non-monotonic threshold swing regions, 
is more vulnerable to noise.)
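A toy version of that experiment, with Range voting standing in for the method under test (implementing Condorcet or IRV here would take more room); everything below is an illustrative sketch, not Brian's actual simulator:

```python
import random

def range_winner(utilities):
    """Candidate with the highest summed rating."""
    cands = utilities[0].keys()
    return max(cands, key=lambda c: sum(u[c] for u in utilities))

def noise_regret(n_voters=100, cands='ABC', noise=0.1, seed=1):
    """True-utility loss from electing on noisy ballots instead of
    the sim people's perfectly knowable true utilities."""
    rng = random.Random(seed)
    true = [{c: rng.random() for c in cands} for _ in range(n_voters)]
    # Reported utilities: true utility plus reporting noise, clamped
    # back into [0, 1].
    noisy = [{c: min(1.0, max(0.0, u[c] + rng.gauss(0.0, noise)))
              for c in cands} for u in true]
    social = lambda c: sum(u[c] for u in true)
    return social(range_winner(true)) - social(range_winner(noisy))
```

Averaging this regret over many seeds, per method, is one way to rank methods by noise robustness.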

 And even if we could somehow measure the voter's ultimate happiness as a 
 function of legislative outcome and come back in time and cast a vote, we 
 don't have utilities for the counterfactual alternatives.
 
 However attractive it might be to fantasize about functions from cardinal 
 utility to social choice, it comes down to an attempt to square a circle or 
 invent a perpetual motion machine. The attemp might be fun, but we know a 
 priori that it will fail.

Are we talking about real people or sim people? I think we can make simulations 
and models that are useful. Lots of people keep trying, including me. Or are 
you saying that we can't reasonably make sim people whose knowable sim qualities 
bear any useful resemblance to the real world? We're talking about all kinds of 
mathematical properties of election methods, why not various measures under 
stochastic test? What would be a good measure?



Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Jonathan Lundell
I agree that simulations can give us insight into the nature of voting systems. 
It's the translation of those results to real elections that I object to. The 
sim voter can be interesting in the model without remotely resembling any real 
voter.

(And I don't believe Arrow was mistaken. He was talking about real-world social 
choices, not models.)



[EM] A monotonic proportional multiwinner method

2010-03-11 Thread Kristofer Munsterhjelm

I think I have found a multiwinner method that is both monotonic and
proportional. I have, at least, found no counterexample.

The method achieves monotonicity by cheating about proportionality:
instead of strictly adhering to the quota, it determines a divisor and
sets up a number of constraints on the output. The idea is similar to
how Webster's method (and other divisor methods) maintain monotonicity
by, in certain cases, violating quota.
Note that although I think it is likely that one can't have both Droop
proportionality and monotonicity, I have no proof of this.

How does the method work? It has two phases, which I'll call the
constraint phase and the margins phase.
For both phases, we'll need to transform the input set of ballots into a
list of solid coalitions. This list gives all the sets for which at
least one voter preferred the members of the set (in any order) to those
not in the set, and is the same data as is used to determine the outcome
in Descending Acquiescing/Solid Coalitions.

For example, consider the ballot set
13: ABC
 1: ACB
11: BAC
10: BCA
17: CAB
18: CBA

The solid coalition list is

Coalition   voters
ABC         70
AB          24
AC          18
BC          28
A           14
B           12
C           27

because 24 voters prefer A and B to everything else (thus voted either
ABC or BAC), 18 voters prefer A and C to everything else, and so on.
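A sketch of that tally: every proper prefix of a ballot is a set the voter ranks solidly above the rest, and the full candidate set is trivially supported by everyone. The data layout is illustrative:

```python
def solid_coalitions(ballots, candidates):
    """ballots: list of (count, ranking) pairs, e.g. (13, 'ABC').
    Returns {frozenset: number of voters ranking that set's members
    (in some order) above every candidate outside it}."""
    support = {}
    for count, ranking in ballots:
        # Each proper prefix of a ranking is solidly supported by it.
        for i in range(1, len(ranking)):
            key = frozenset(ranking[:i])
            support[key] = support.get(key, 0) + count
    # All voters solidly support the set of all candidates.
    support[frozenset(candidates)] = sum(c for c, _ in ballots)
    return support
```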

The first phase consists of setting up constraints to narrow down which
group of winners we are going to elect. The constraint on each
coalition is:
 at least round(V_i / q) candidates from this coalition must be in
the outcome[1],

where round is the rounding-off function, V_i is the number of
voters supporting coalition i, and q is determined to be the least value
that doesn't lead to a contradiction. A particular choice for the value
of q leads to a contradiction if it's impossible to construct an
outcome that passes all the constraints.

In other words, determine the value of q so that at least one set can
pass the combined set of constraints (at least round(V_i / q) of the
candidates from coalition i must be in the outcome). Call this value
the divisor. It can be found using binary search in conjunction with
trying all possible outcomes to find out how many pass the constraints,
or (probably) in some more sophisticated manner.
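The binary search works because the constraints only loosen as q grows, so feasibility is monotone in q. A sketch, brute-forcing the outcomes as suggested (here `support` maps each coalition, as a frozenset, to its voter count; names are illustrative):

```python
from itertools import combinations
from math import floor

def passing_outcomes(support, candidates, seats, q):
    """All seat-sized outcomes meeting every 'elect at least
    round(V_i / q) from coalition i' constraint (round = half-up)."""
    rnd = lambda x: floor(x + 0.5)
    return [set(o) for o in combinations(candidates, seats)
            if all(len(set(o) & coal) >= rnd(v / q)
                   for coal, v in support.items())]

def find_divisor(support, candidates, seats):
    """Least q (to within float tolerance) admitting some outcome."""
    lo, hi = 1e-3, 2.0 * max(support.values())  # hi is always feasible
    for _ in range(40):
        mid = (lo + hi) / 2.0
        if passing_outcomes(support, candidates, seats, mid):
            hi = mid
        else:
            lo = mid
    return hi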

Returning to our example, let's say we're going to elect a council of
size 2.
Our initial options are: elect {AB}, {AC}, or {BC}. The value of q that
satisfies our desiderata is slightly greater than 28, let's say 28.0034.
That value gives the following constraints:

Coalition   voters   elect at least
ABC         70       round(70/28.0034) = 2
AB          24       round(24/28.0034) = 1
AC          18       round(18/28.0034) = 1
BC          28       round(28/28.0034) = 1
A           14       round(14/28.0034) = 0
B           12       round(12/28.0034) = 0
C           27       round(27/28.0034) = 1

(Note here that A is *very* close to getting a seat, as 14/28.0034 =
0.49994. That will become important later.)

Can AB pass? No, because it violates the 'must have at least 1 of the
{C} coalition' constraint. Can AC pass? Yes. Can BC pass? Yes.

So in this example, the constraint phase has narrowed down our choice of
outcomes to AC and BC. But which should we pick? That's where the
margins phase comes into play, and herein lies the trick that makes the
method monotonic:

For some coalition i, define i's /margin/ equal to:
floor(V_i / q) + 0.5 - V_i / q.
Calculate these. For our example:

Coalition   voters   margin
ABC         70        0.000307
AB          24       -0.357038
AC          18       -0.142778
BC          28       -0.499877
A           14        0.000060
B           12        0.071481
C           27       -0.464167

Assign to each possible outcome the margins of those coalitions with 
which it shares at least one candidate, then sort the margins, lesser 
first. Negative margins have to be adjusted somehow, but it usually 
doesn't matter how you do it - I just add one to them, as that seems the 
most natural. Margins for coalitions that don't match are set to 
infinity, so that any margin from a coalition that actually matches 
(shares at least one candidate) is better than no match, which makes 
sense.


AC shares at least one candidate with {ABC, AB, AC, BC, A, C}.
BC shares at least one candidate with {ABC, AB, AC, BC, B, C}.

Thus the sorted margins lists are:
     positive margins     | negative margins, adjusted             | n/a
AC:  0.000060  0.000307   | 0.500123  0.535833  0.642962  0.857222 | infinity
BC:  0.000307  0.071481   | 0.500123  0.535833  0.642962  0.857222 | infinity
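A sketch of the margin bookkeeping, using the post's q = 28.0034 (small differences in the last digits come from q being given only approximately):

```python
from math import floor

def margin(votes, q):
    """floor(V/q) + 0.5 - V/q: distance below the next half-seat
    rounding boundary. Positive means 'rounded down', negative 'up'."""
    return floor(votes / q) + 0.5 - votes / q

def sorted_margins(outcome, support, q):
    """Margins of every coalition sharing a candidate with the outcome,
    negatives adjusted by +1, sorted lesser first. Non-matching
    coalitions count as infinity and are simply omitted here."""
    ms = [margin(v, q) for coal, v in support.items()
          if set(outcome) & coal]
    return sorted(m + 1.0 if m < 0 else m for m in ms)
```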


Re: [EM] Simulating multiwinner goodness

2010-03-11 Thread Kristofer Munsterhjelm

Brian Olson wrote:

There was a question on the list a while ago, and skimming to catch
up I didn't see a resolution, about what the right way to measure
multiwinner result goodness is.


[snip]


This is sounding a bit like an election method definition, and I
expect that this definition of 'what is a good result' does pretty
much imply a method of election. At worst, given ratings ballots that
we can treat as the simulator preferences, for not too large a set of
winning sets of candidates, get a fast computer and run all the
combinatoric possibilities and elect the set with the highest
measured sum happiness.


The details of proportional representation aren't well understood. 
Proportional representation itself appears to involve a tradeoff between 
accuracy - proportionality of what counts - and quality - how highly the 
individual voters rank a given candidate.


There is something similar for single-winner methods: the question of 
how much to value what a few rank very highly in comparison to what some 
rank in the middle; but for single-winner methods, we at least have 
concepts like the median voter and desirable-sounding criteria like 
clone independence and the Condorcet criterion.


What I'm trying to say is that before we can optimize, we must know what 
it is we're going to optimize -- or proceed in a vague direction using 
feedback (as is part of my reason for experimenting with multiwinner 
methods). What would be analogous to the median voter concept for 
multiwinner elections - accurate reproduction of opinion space? 
According to what measure? And so on...



Another thing we could measure in multiwinner elections (and possibly
single winner) is the Gini inequality measure. If we have a result
with both pretty high average happiness and low inequality, that's a
good result.


The proportionality-scoring part of my election methods program works 
somewhat like this, according to a very simple model. Every candidate 
and voter has a binary n-vector of ayes/nays (representing binary 
opinions). Voters prefer candidates closer to them (Hamming-distance-wise). 
Then the proportion of ayes for each bit can be measured both for the 
elected council and for the people in general; the closer the two, the 
better.


I use either root mean squared error or the Sainte-Lague index for 
measuring error, though my program can also use the Gini (or the 
Loosemore-Hamby index for that matter).
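A sketch of that scoring step, covering just the error measures over per-issue aye-proportions (the candidate/election machinery is omitted, and the function names are mine):

```python
def aye_proportions(group):
    """Per-issue fraction of 1-bits across a group of binary opinion vectors."""
    n = len(group)
    return [sum(v[i] for v in group) / n for i in range(len(group[0]))]

def rmse(voters, council):
    """Root-mean-squared gap between electorate and council aye-proportions."""
    pop, cou = aye_proportions(voters), aye_proportions(council)
    return (sum((p - c) ** 2 for p, c in zip(pop, cou)) / len(pop)) ** 0.5

def sainte_lague_index(voters, council):
    """Sum of (council - population)^2 / population over issues; issues
    with zero population support are skipped to avoid dividing by zero."""
    pop, cou = aye_proportions(voters), aye_proportions(council)
    return sum((c - p) ** 2 / p for p, c in zip(pop, cou) if p > 0)
```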




Re: [EM] Smith, FPP fails Minimal Defense and Clone-Winner

2010-03-11 Thread Kristofer Munsterhjelm

Juho wrote:

On Mar 10, 2010, at 7:26 PM, Kristofer Munsterhjelm wrote:


Juho wrote:


I'm not aware of any sequential candidate elimination based method 
that I'd be happy to recommend. One can however describe e.g. 
minmax(margins) in that way. Eliminate the candidate that is worst in 
the sense that it would need most additional votes to win others, 
then the next etc. In the elimination process one would consider also 
losses to candidates that have already been eliminated (I wonder if 
this approach makes it less natural looking than the elimination 
process of IRV).


To my knowledge, Schulze-elimination is the same as basic Schulze. In 
other words, if you run Schulze, eliminate the loser, run it again, 
etc, you end up with the original result. That's not very useful, but 
still...


It might also be that any full-blown candidate elimination method 
(you run the election as if the one that was eliminated never stood) 
with a weighted positional base method (Borda, Plurality, ...) is 
nonmonotonic. I can't prove it though!


One more addition to this elimination discussion. Maybe ability to give 
an ordering of the candidates is more important (and more generic) than 
using an elimination process. The preference graphs that many Condorcet 
methods use may not be as easy to understand to the voters as plain 
ordering is.


In principle single winner methods need not be able to produce any 
ordering of the candidates. It is enough to pick the single winner. But 
in order to make it easy to the voters and candidates to understand the 
results (and to explain e.g. how close some candidate was to winning the 
election) good and simple graphical and numeric information may be 
valuable in practical elections.


Both of the advanced methods give an ordering, as do the obvious ones 
(Minmax, least reversal, Copeland, second order 2-1-Copeland...). They 
don't provide numerical information (this close to winning), but that 
is hard: I read a paper about extending Schulze to do so, and it used 
some rather complicated use of linear programming. Could you sell that 
to the public? Not very likely, unless they happened to be of the same 
kind that voted for the use of Meek in local New Zealand elections.


(I have to add that if people want to keep the USA as it mostly is, a 
two party based system, then I must recommend FPTP :-). And if not, 
then maybe also some additional (maybe proportionality related) 
reforms are needed.)


Wouldn't something like Condorcet multiwinner districts be better? 
Pick a good Condorcet method and send the 5 first ranked on its social 
ordering to the legislature. That would pick a bunch of centrists 
(thus have stability), but it would pick the centrists people 
actually wanted.


Hm, that might not provide a true two-party system, though. One could 
also have a PR system where the number of votes is weighted so that 
parties with broad support gain superproportional power, but then the 
question becomes why one should bother with the PR at all.


Maybe Condorcet + single winner districts is a more stable approach. 
That combination makes a two-party system just somewhat softer, and 
allows the party structure (in individual districts) to evolve in time.


Another approach to systems between proportional representation and the 
two-party approach could be to have a proportional method but use 
districts with only very few representatives (2, 3,...). That would 
provide rough but in principle accurate proportionality and still give 
space only to few major parties. (Obviously my definition of full 
proportionality must be with 1/n of the votes you will get one seat 
(where n = number of representatives).)


An interesting hybrid, I think (and I've mentioned it before), would be 
to have a bicameral system where senators are elected according to a 
statewide Condorcet method (pick a good centrist for each state), and 
the House representatives are elected according to PR.


Having just a single from each state may be /too/ centrist, but to pick 
two senators from each using a proportional ordering might work - as 
long as it doesn't introduce partisan division.




Re: [EM] Smith, FPP fails Minimal Defense and Clone-Winner

2010-03-11 Thread Juho

On Mar 11, 2010, at 11:41 PM, Kristofer Munsterhjelm wrote:




Both of the advanced methods give an ordering, as do the obvious  
ones (Minmax, least reversal, Copeland, second order 2-1- 
Copeland...). They don't provide numerical information (this close  
to winning)


At least minmax(margins) does. It gives each candidate the number of  
additional votes that would guarantee victory to them. That is quite  
simple and could be used to e.g. provide information to the voters  
while the counting is in progress (1000 votes still not counted, 100  
first preference votes would be enough to win). Also a simple  
histogram would tell how each candidate is doing at the moment.
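The votes-needed figure Juho describes can be read straight off the pairwise matrix (a sketch; it assumes each extra ballot ranking the candidate first shrinks every one of their defeat margins by one):

```python
def votes_to_win(pairwise, cand):
    """Minmax(margins) score read as 'additional votes needed': the
    worst pairwise defeat margin of cand, or 0 if cand already beats
    or ties everyone. pairwise[a][b] = voters ranking a over b."""
    worst = max(pairwise[o][cand] - pairwise[cand][o]
                for o in pairwise if o != cand)
    return max(0, worst)
```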


, but that is hard: I read a paper about extending Schulze to do so,  
and it used some rather complicated use of linear programming. Could  
you sell that to the public? Not very likely, unless they happened  
to be of the same kind that voted for the use of Meek in local New  
Zealand elections.




An interesting hybrid, I think (and I've mentioned it before), would  
be to have a bicameral system where senators are elected according  
to a statewide Condorcet method (pick a good centrist for each  
state), and the House representatives are elected according to PR.


Yes, having representatives that are non-partisan by nature could also  
add something interesting and useful to an otherwise very party-oriented  
and divided community.


(One could btw call this kind of representative a widist instead of a  
centrist, since Condorcet would pick a candidate with wide support  
rather than one supported specifically by the centrist parties. Or maybe  
the term centrist has some similar meaning in addition to referring to  
the parties in the 

Re: [EM] Smith, FPP fails Minimal Defense and Clone-Winner

2010-03-11 Thread Raph Frank
On Thu, Mar 11, 2010 at 9:41 PM, Kristofer Munsterhjelm
km-el...@broadpark.no wrote:
 Having just a single from each state may be /too/ centrist, but to pick two
 senators from each using a proportional ordering might work - as long as it
 doesn't introduce partisan division.

You would probably end up getting the centre of each of the 2 parties
if you did that, which defeats the idea of finding centrists to
cancel out the 2-party system.

You could split states into districts, if you wanted more than 1
senator elected at the same time.

Ofc, districting runs into gerrymandering problems.

Election-Methods mailing list - see http://electorama.com/em for list info


Re: [EM] A monotonic proportional multiwinner method

2010-03-11 Thread Warren Smith
Kristofer Munsterhjelm's monotonic proportional multiwinner method
-- a few comments

(1) wow, very complicated.  Interesting, but I certainly do not feel
at present that
I fully understand it.

(2) RRV obeys a monotonicity property and a proportionality property
http://rangevoting.org/RRV.html

(3) assuming we're willing to spend exponential(C) computer time to handle
elections with C candidates, then KM's constraints form a linear program, which
in fact would be a 0-1 integer program since candidates either get elected or
not (a candidate cannot be 37% elected).  The program has an exponential(C)
number of constraints.
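For reference, the RRV method from point (2) can be sketched in a few lines. This follows the weighting rule described on the linked page (a ballot's weight is 1/(1 + S/M), where S is the score it already gave to elected winners and M is the maximum score); the ballots below are made-up examples:

```python
def rrv(ballots, seats, max_score=9):
    """Reweighted Range Voting (sketch): repeatedly elect the candidate with
    the highest weighted score total; each ballot is deweighted by
    1 / (1 + S/M), where S is the score it gave to already-elected winners."""
    candidates = sorted(set().union(*ballots))  # sorted for a deterministic tiebreak
    winners = []
    for _ in range(seats):
        def weighted_total(c):
            total = 0.0
            for b in ballots:
                spent = sum(b.get(w, 0) for w in winners)
                total += b.get(c, 0) / (1.0 + spent / max_score)
            return total
        winners.append(max((c for c in candidates if c not in winners),
                           key=weighted_total))
    return winners

# Two blocs: 6 voters max-score the X slate, 4 voters max-score the Y slate.
ballots = 6 * [{'X1': 9, 'X2': 9}] + 4 * [{'Y1': 9, 'Y2': 9}]
print(rrv(ballots, 3))
# → ['X1', 'Y1', 'X2'] -- a 2:1 seat split matching the 60:40 electorate
```

The 60/40 voter split yields a 2:1 split of the three seats, which is the proportionality behavior being claimed.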


-- 
Warren D. Smith
http://RangeVoting.org  -- add your endorsement (by clicking
endorse as 1st step)
and
math.temple.edu/~wds/homepage/works.html



Re: [EM] Smith, FPP fails Minimal Defense and Clone-Winner

2010-03-11 Thread Kristofer Munsterhjelm

Raph Frank wrote:

On Thu, Mar 11, 2010 at 9:41 PM, Kristofer Munsterhjelm
km-el...@broadpark.no wrote:

Having just a single senator from each state may be /too/ centrist, but picking two
senators from each using a proportional ordering might work - as long as it
doesn't introduce partisan division.


You would probably end up getting the centre of each of the 2 parties
if you did that, which defeats the idea of finding centrists to
cancel out the 2-party system.

You could split states into districts, if you wanted more than 1
senator elected at the same time.

Ofc, districting runs into gerrymandering problems.


I've thought about this, and it makes sense. Any argument I could use 
against having a division inside states could also be used against a 
division among states (e.g. why have one from each state? why not one 
from a block of states? Thus you should have one from each region of a 
state if you have more than one).


Districting runs into gerrymandering. I think the solution there is to 
let some independent body do the redistricting -- it works in Canada. 
That raises the question of why that hasn't been done already, but I 
think the parties are just too strong. The initial cancelling-out done 
by Condorcet might be enough to pull the system away from that kind of 
entrenchment.


More exotic systems might be possible - for instance, some sort of 
supermajority requirement for councils of two, or a weighted kind of PR 
that pulls the centrists towards the center, or something, but that 
lacks the simplicity of the ideas above.




Re: [EM] A monotonic proportional multiwinner method

2010-03-11 Thread Kristofer Munsterhjelm

Warren Smith wrote:

Kristofer Munsterhjelm's monotonic proportional multiwinner method
-- a few comments

(1) wow, very complicated.  Interesting, but I certainly do not feel
at present that I fully understand it.


Alright. If you have any questions, feel free to ask.


(2) RRV obeys a monotonicity property and a proportionality property
http://rangevoting.org/RRV.html


My experiments with multiwinner methods seem to indicate that you need 
proportionality not just for single candidates but also for groups of 
them, as is satisfied by the DPC or by this method.



(3) assuming we're willing to spend exponential(C) computer time to handle
elections with C candidates, then KM's constraints form a linear program, which
in fact would be a 0-1 integer program since candidates either get elected or
not (a candidate cannot be 37% elected).  The program has an exponential(C)
number of constraints.


So do methods like Schulze STV. In any case, I wonder if it's possible 
to make some sort of polytime algorithm for my method, but it would 
probably be quite difficult. One would have to understand how the 
constraints shift as the divisor changes in order to find the 
best-margin non-contradictory council implicitly.


If it's possible, a comparison would be that a method like STV satisfies 
the Droop proportionality criterion even though this is also, 
mathematically speaking, an integer program (every coalition supported 
by more than k Droop quotas should have at least k members in the 
outcome, unless the coalition itself has fewer than k candidates).

