Matthew Toseland wrote:
> Not in terms of predecessor samples afaics. The only new information we
> get is how many requests have survived this far. And beyond a certain
> distance that presumably just complicates the underlying signal
> (proportion of requests likely to come to me if X is the originator) ???

Good point!

I guess the only remaining issue is branching - when we get a DNF/RNF 
(data not found / route not found) from a peer, do we (1) always return 
a DNF/RNF without trying any more peers, or (2) toss the coin again 
before trying each remaining peer?

It seems like (1) might be prone to getting stuck in rabbit holes, but 
(2) might visit too many nodes. In fact I'm sure it will - once we send 
a request to a peer, that peer doesn't (and shouldn't) know whether it's 
the first node ever to see the request. So the expected size of each 
branch is equal to the expected size of the entire tree. I think that 
means the expected size of the tree is infinite, which would make sense 
- the depth of the tree is exponentially distributed, so the size of 
the tree is roughly Pareto distributed, and Pareto distributions can 
have finite or infinite means depending on the parameters. So can we 
adjust the parameters (depth and/or branching factor) to get a finite 
mean?
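
To spell out the step from "each branch looks like the whole tree" to an 
infinite mean: treat the request tree as a standard Galton-Watson 
branching process with mean branching factor m per node (an idealisation 
that ignores loops and the finite size of the network). Then

\[
E[S] \;=\; 1 + m\,E[S] \quad\Longrightarrow\quad E[S] \;=\; \frac{1}{1-m} \quad (m < 1),
\]

and there is no finite solution at all when m >= 1 - which is exactly 
the situation where every peer behaves like a fresh root.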

The expected size of the tree will be finite if each node has less than 
one child on average, excluding loops. So if we try each peer with 
decreasing probability, in such a way that the average number of peers 
tried (excluding loops) is less than 1, then the expected number of 
nodes visited should be finite.
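
As a quick sanity check on that criterion, here's a toy Monte Carlo 
(Python; the up-to-two-children offspring model and the parameters are 
assumptions chosen for illustration, nothing to do with the actual 
routing code):

import random

def tree_size(p, cap=100_000):
    # Branching process: each node spawns up to two children, each
    # independently with probability p, so the mean offspring is m = 2p.
    # The cap keeps near-critical runs from running forever.
    size, frontier = 0, 1
    while frontier and size < cap:
        size += frontier
        frontier = sum(1 for _ in range(2 * frontier)
                       if random.random() < p)
    return size

for p in (0.40, 0.49, 0.50):  # mean offspring m = 0.8, 0.98, 1.0
    mean = sum(tree_size(p) for _ in range(1000)) / 1000
    print(f"m = {2 * p:.2f}: mean tree size = {mean:.1f}")

For m < 1 the empirical mean settles near 1/(1-m) (about 5 and 50 here); 
at m = 1 it just grows with the cap.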

For example, we could try the first peer with probability 1-pDrop, the 
second with probability pDrop*(1-pDrop), the third with probability 
pDrop^2*(1-pDrop), and so on, unless we get a RejectedLoop, in which 
case we move on to the next peer without changing the probability. The 
expected number of peers tried (excluding loops) is then at most 
(1-pDrop) + pDrop*(1-pDrop) + pDrop^2*(1-pDrop) + ... = 1 - pDrop^n < 1, 
which satisfies the criterion above.
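
In code, the loop might look something like this (a Python sketch of the 
idea only - the Peer stub, the result constants, and try_peers are all 
made up for illustration, not real Freenet identifiers):

import random

SUCCESS, DNF, REJECTED_LOOP = "success", "dnf", "rejected_loop"

class Peer:
    # Toy stand-in for a real peer: mostly DNFs, occasionally succeeds
    # or reports a loop, just so the sketch is runnable.
    def send(self, request):
        r = random.random()
        if r < 0.05:
            return SUCCESS
        if r < 0.15:
            return REJECTED_LOOP
        return DNF

def try_peers(request, peers, p_drop):
    # Toss a coin before each peer: try it with probability p_try,
    # otherwise give up and return DNF without trying further peers.
    # p_try starts at 1-pDrop and shrinks by a factor of pDrop after
    # each DNF/RNF; a RejectedLoop moves on without shrinking it.
    p_try = 1.0 - p_drop
    for peer in peers:
        if random.random() >= p_try:
            return DNF
        result = peer.send(request)
        if result == SUCCESS:
            return SUCCESS
        if result != REJECTED_LOOP:
            p_try *= p_drop
    return DNF

print(try_peers("some key", [Peer() for _ in range(10)], p_drop=0.5))

One subtlety: in this sketch a RejectedLoop still costs a coin toss at 
the unchanged probability before the next peer; if loops should be 
completely free, the toss would be skipped after a loop rather than 
repeated.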

Cheers,
Michael
