On Jan 30 2008, Matthew Toseland wrote:
>So I propose to implement weighted-coin-followed-by-HTL. This should cause 
>very little disruption, as we won't have very short requests, in fact the 
>code changes would be very minor.
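
For reference, here's roughly how I'm reading "weighted coin followed by 
HTL" - a minimal sketch with made-up names and numbers, not the actual 
code change, so correct me if I've got the mechanism wrong:

import java.util.Random;

// Illustrative only: a request starts in a coin phase where each hop
// flips a weighted coin; once the coin comes up tails it switches to an
// ordinary fixed HTL countdown.  The fixed countdown means there are no
// very short requests, and a node seeing the coin phase can't tell how
// many hops the request has already taken.
class ProbabilisticHtl {
    static final double P_SWITCH = 0.2; // hypothetical coin weight
    static final int FIXED_HTL = 10;    // hypothetical countdown length

    private boolean coinPhase = true;
    private int htl = FIXED_HTL;
    private final Random random = new Random();

    // Called at each hop; returns false when the request should stop.
    boolean decrementAtHop() {
        if (coinPhase) {
            if (random.nextDouble() < P_SWITCH)
                coinPhase = false; // tails: start the HTL countdown
            return true;           // always forwarded during the coin phase
        }
        return --htl > 0;          // ordinary HTL countdown
    }
}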

Sounds good - your arguments against parallel/repeated inserts are 
convincing. What about automatic re-requests - if they're triggered by a 
timeout or a DNF, could an attacker measure the fraction of re-requests 
that reach his node and work out the likely distance to the originator? 
Would randomising the re-request interval help?
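
In case it helps to pin down what "randomising" would mean in practice, 
something like the following - purely a sketch, the names and delays are 
invented:

import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: fire the re-request after a randomised delay rather
// than a fixed timeout, so an observer can't link a re-request to the
// original request by its timing alone.  Whether that actually blunts
// the fraction-of-re-requests measurement is the open question above.
class ReRequestScheduler {
    static final long BASE_DELAY_MS = 30_000; // hypothetical base interval
    static final long JITTER_MS = 60_000;     // hypothetical jitter range

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final Random random = new Random();

    // Schedule a re-request with a uniformly random extra delay.
    void scheduleReRequest(Runnable reRequest) {
        long delay = BASE_DELAY_MS + (long) (random.nextDouble() * JITTER_MS);
        executor.schedule(reRequest, delay, TimeUnit.MILLISECONDS);
    }
}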

>We can always change it later on to for 
>example pure weighted coin (although I'm not convinced that will work well 
>for inserts).

Me neither - although we could use a lower pDrop for inserts if many of 
them are failing to reach their targets.

Alternatively, we could set pDrop very low (say 1%) but toss the coin 
*before* trying each peer, so loops and overloads would count towards 
termination too. If the topology's any good, most searches will get close 
to the target and then bounce around getting RejectedLoops until the coin 
comes up tails, which is fairly cheap. But if the topology's broken and 
there are a lot of RNFs, we'll have a higher chance of escaping dead ends.
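
To make that variant concrete, something like this (illustrative names, 
not the actual routing code):

import java.util.List;
import java.util.Random;

// Sketch of the "toss the coin before each peer" variant.  With pDrop
// around 1% a request usually reaches the target's neighbourhood; once
// it starts bouncing off RejectedLoops there, every cheap rejection
// still costs a coin toss, so it terminates fairly quickly.  A lower
// pDrop could be passed in for inserts if too many fail to reach their
// targets.
class CoinPerPeerRouter {

    enum Result { SUCCESS, REJECTED_LOOP, OVERLOAD, NOT_FOUND }

    static final double P_DROP_REQUEST = 0.01;  // the 1% suggested above
    static final double P_DROP_INSERT = 0.005;  // hypothetical lower value for inserts

    private final Random random = new Random();

    Result route(Object key, List<Peer> peersByDistance, double pDrop) {
        for (Peer peer : peersByDistance) {
            // Coin toss before *every* attempt, so loops and overloads
            // count towards termination as well as successful forwards.
            if (random.nextDouble() < pDrop)
                return Result.NOT_FOUND;  // tails: give up
            Result result = peer.tryForward(key);
            if (result == Result.SUCCESS)
                return result;
            // RejectedLoop or overload: fall through and try the next
            // peer, which is how we get a better chance of escaping a
            // dead end when the topology's broken.
        }
        return Result.NOT_FOUND;          // ran out of peers (RNF)
    }

    interface Peer {
        Result tryForward(Object key);
    }
}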

Cheers,
Michael
