On 30.1.2012, at 8.46, Kristofer Munsterhjelm wrote:

> We know that if some method X passes all criteria Y does and then some, we 
> can suppose that X is better than Y.

I don't think criteria are black and white in that sense. It is quite possible
that a method meeting all but one of the "important" criteria that we have
chosen is worse than a method that meets none of them. The reason is that some
vulnerabilities are not meaningful in practical elections, while others can
make the method totally unusable.

I'd thus measure how well a method meets some criterion rather than whether it
meets that criterion absolutely in every theoretically possible situation.

In cryptography the overall strength of a system is typically only as strong
as its weakest link. In the same way, the overall vulnerability level of an
election method is typically close to the level of its most problematic
vulnerability. That means the key target is to improve the worst
vulnerabilities, not to reduce the number of vulnerabilities against some
chosen list of criteria, nor to agree on which criteria must be met absolutely
(unless there are criteria where every theoretical vulnerability is
automatically also a serious problem in practical elections).
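The weakest-link idea can be sketched as a toy calculation. All criterion names and severity scores below are hypothetical, invented purely to illustrate the aggregation rule; they are not measurements of any real method:

```python
# Toy illustration of weakest-link vulnerability assessment.
# Each vulnerability gets a practical-severity score on a 0-10 scale
# (hypothetical names and numbers, for illustration only).
method_a = {"burial": 2, "compromise": 3, "clone_spoiling": 9}
method_b = {"burial": 4, "compromise": 4, "clone_spoiling": 4}

def overall_vulnerability(scores):
    # Weakest-link view: the method is roughly as vulnerable
    # as its single worst vulnerability, not the sum or count.
    return max(scores.values())

# Method A scores better on most criteria, but its worst case dominates:
print(overall_vulnerability(method_a))  # 9
print(overall_vulnerability(method_b))  # 4
```

On this view method B, which fails more criteria mildly, would still be preferred to method A with its one severe failure.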

> (At this point, some people try to get around methods failing certain 
> criteria by saying "sure, it fails, but it doesn't fail where it counts".

"Doesn't fail where it counts" or "the vulnerability is not too bad".

> But it can easily lead to a lot of back-and-forth about what "where it 
> counts" really means and what one really wants of an election method.

Yes, unfortunately so. My approach is to look at the environment where one
wants to use a particular method, and then assess the level of damage that its
vulnerabilities cause in that particular environment.

(There is no universally best election method. Different methods are good for
different needs. That's why one cannot determine the best method without
knowing what the method will be used for and in what kind of environment.)

> Pass/fail, in contrast, is completely unambiguous. Either a method passes or 
> it doesn't, and if a method passes a criterion everywhere, then obviously it 
> passes it "where it counts", no matter where that might be.)

Yes, full compatibility is full compatibility everywhere, but using only
on/off criteria does not give results as accurate as measuring criterion
compatibility on some richer scale. For example, some "black" results may
actually be "white" in real-life situations (i.e. vulnerable in theory but not
in practice).

All interesting methods fail at least one criterion that sounds important, and
that is important also in the sense that in some methods bad violations of it
are very bad in practice as well. That does not, however, mean that all
methods are useless or bad. One needs a balanced approach to all criteria, in
the spirit of keeping the overall vulnerability of a method small. This may
sometimes mean allowing minor vulnerabilities somewhere in order to 1) make
some other vulnerabilities less serious, or 2) improve the positive properties
of the method (e.g. to pick the best winner instead of resorting to some less
good alternative, perhaps as a result of optimizing the strategy resistance of
the method too far).

Juho



----
Election-Methods mailing list - see http://electorama.com/em for list info