On Sun, Jun 17, 2001 at 01:34:58PM +0200, Oskar Sandberg wrote:
> There is no guesswork involved.
I was referring to:
> The
> second is the values we are using for the Expected value and variance,
> which are based on some pretty rough experiments from last spring.
You say estimates, I say guesswork. It is very likely that measurements
taken now would be radically different from those taken months ago.
> > There is no evidence that a HTL of 100 is unreasonable, but if such a
> > HTL would lead to nodes waiting around for ages, then the problem is the
> arbitrary calculation, not necessarily that the HTL is too large.
>
> The reason we would have to fucking wait a long time to restart when HTL
> is 100, is because it takes a fucking long time to make 100 hops even if
> everything goes right. To say that is because of how we calculate it is
> silly beyond words.
Not true. Ask anyone who requests at an HTL of 100 and you will find
that most requests return in a reasonable amount of time, even failed
requests.
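To make the point concrete, here is a rough sketch of the kind of
timeout calculation in question. The class name, the per-hop constants,
and the exact formula (mean plus a few standard deviations of the sum)
are my own illustrative assumptions, not the code in the tree:

// Hypothetical sketch of a restart timeout derived from per-hop
// mean and variance estimates. All numbers are made up.
public class HtlTimeoutSketch {

    // Assumed per-hop statistics (milliseconds). Real values would
    // come from measurement, which is the point of contention above.
    static final double MEAN_HOP_TIME_MS = 2000.0;
    static final double HOP_TIME_VARIANCE = 1.0e6; // stddev of 1000 ms

    // Treating hops as independent, the total time over htl hops has
    // mean htl*mu and variance htl*sigma^2, so a timeout of
    // mean + k*stddev grows only linearly in htl.
    static long restartTimeoutMs(int htl, double k) {
        double mean = htl * MEAN_HOP_TIME_MS;
        double stddev = Math.sqrt(htl * HOP_TIME_VARIANCE);
        return Math.round(mean + k * stddev);
    }

    public static void main(String[] args) {
        for (int htl : new int[] {10, 25, 100}) {
            System.out.println("HTL " + htl + ": timeout ~ "
                + restartTimeoutMs(htl, 3.0) + " ms");
        }
    }
}

Under assumptions like these, the wait grows linearly with the HTL while
the spread grows only with its square root, so an HTL of 100 need not
mean an absurd wait if the per-hop estimates are sane.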
> However, that would be a bitch to implement, add lots
> of load, and be an invaluable gift to anyone trying to use traffic
> analysis on the network - which is not something I am willing to accept
> for what I see as a half-assed workaround for the fact the routing is
> currently not working.
If routing wasn't working, then nobody would be able to retrieve
anything, and our simulations would have demonstrated this. The reality
is that multiple independent simulations have demonstrated that routing
does work. It is probable that overzealous caching is currently
degrading document longevity, and we will address this. But in the
meantime, people want to know how they can improve performance *now*,
and in everybody's experience, it seems that increasing the HTL does
this. It may be a kludge, and it hopefully won't be necessary in 0.4, but it
does appear beneficial now. Do you advocate trying to hide this fact
from people?
Ian.