Hi again. Back to the question of when we can assume convergence in the case of repeated approval polls, I found the following:
Let me first recall that when voters use the 0-info strategy (maximizing expected utility) or Weinstein's strategy (maximizing median utility) and adjust their priors by moving them towards the last poll's winner, convergence is guaranteed, since successive winners' approval scores must increase. Recall also that, in contrast, when voters use strategy A, the polls need not converge even if the priors converge.

Now here is more evidence that adjustment strategies which take into account more than just the polls' winners are unlikely to yield convergence. Assume each voter i maximizes expected utility using the 0-info strategy, starts with some initial set of priors p(x,i,0), and all voters adjust their priors after each poll by moving them towards the lottery which puts weight l(r) on the r-th ranked candidate of the last poll, where l(1) >= l(2) >= ... >= l(k) > 0 for some k>1, and l(r)=0 for all r>k. For example, if k=3, l(1)=1/2, and l(2)=l(3)=1/4, then after each poll voters move their priors "half a step" towards the winner and "a quarter step" each towards the second- and third-placed candidates.

Then there is a situation in which the process cannot converge. It consists of k+1 candidates 0,...,k and k+1 voters 0,...,k, with a cyclic pattern of utilities as follows: voter 0 assigns utility 0 to candidate 0 and utility 1-l(k)^x to every other candidate x>0. Analogously, voter i assigns utility 0 to candidate i, utility 1-l(k)^(x-i) to every candidate x>i, and utility 1-l(k)^(x+k+1-i) to every candidate x<i. Now assume the process converged, so that eventually the approval ranking is stable, and let, without loss of generality, candidate 0 be the last-ranked candidate in that limit ranking. Then eventually all priors are arbitrarily close to the vector (l(1),l(2),...,l(k),0).
But when we compute the resulting cutoffs from these limit priors, we see that voter 0 has a cutoff strictly between 1-l(k) and 1-l(k)^2 (that is, above voter 0's utility for candidate 1 but below that for candidate 2), while every other voter i has a cutoff strictly between 0 and 1-l(k). Consequently, voter 0 approves of candidates 2,3,...,k, and each other voter i approves of all candidates except i. But this means that candidate 0 gets approval score k whereas candidate 1 only gets approval score k-1, in contradiction to the fact that candidate 1 was ranked above candidate 0. QED.

I am not sure yet whether a similar counterexample applies to Weinstein's strategy, too, but I fear this will be the case. So it seems that in order to guarantee convergence, voters indeed need to focus on the polls' winners only, or at least let their adjustment weights for lower-ranked candidates converge to zero over time. This is a pity, since I had hoped that a good strategy would be to place equal weights on the first two candidates, because then the priors would probably be better approximations to the probability of being in a two-way tie, as required for the justification of the 0-info strategy...

Jobst

Simmons, Forest wrote:
> Jobst gave examples in which optimal approval strategy for someone with
> preferences A>B>C>D would be to approve only A and C.
>
> Mike Ossipoff and Richard Moore first made me aware of these
> counterintuitive possibilities. But I still believe that one would have
> to have impossibly precise and reliable probability and utility
> estimates before one would gain a significant advantage by skipping over
> B.
>
> Usually the probabilities and the utilities are crude subjective
> estimates. However, in the case of repeated balloting, the
> probabilities get refined (if we can figure out how to do it!), and for
> the sake of idealization we can assume that the utilities are precise,
> too.
>
> If we can find a repeated balloting method that takes these
> refinements into account, then great, but if it is only tractable to
> forget the tie probabilities and use the simpler "cutoff" style
> strategies, then no big loss.
>
> Forest

----
Election-methods mailing list - see http://electorama.com/em for list info